Nov 24 11:30:16 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 11:30:16 crc restorecon[4646]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 
11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:30:16 crc 
restorecon[4646]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 
11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:16 crc restorecon[4646]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:30:17 crc restorecon[4646]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 11:30:17 crc kubenswrapper[4789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:30:17 crc kubenswrapper[4789]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 11:30:17 crc kubenswrapper[4789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:30:17 crc kubenswrapper[4789]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 11:30:17 crc kubenswrapper[4789]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 11:30:17 crc kubenswrapper[4789]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.885845 4789 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.890954 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.890988 4789 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.890999 4789 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891010 4789 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891021 4789 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891031 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891040 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891048 4789 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891056 4789 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891064 4789 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891071 4789 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891079 4789 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891087 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891095 4789 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891103 4789 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891111 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891118 4789 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891126 4789 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891134 4789 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891144 4789 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891154 4789 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891162 4789 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891170 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891178 4789 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891186 4789 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891197 4789 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891207 4789 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891216 4789 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891224 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891233 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891241 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891248 4789 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891270 4789 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891278 4789 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891286 4789 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891294 4789 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891301 4789 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891309 4789 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891317 4789 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891325 4789 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891333 4789 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891340 4789 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891348 4789 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891355 4789 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891363 4789 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:30:17 crc kubenswrapper[4789]: 
W1124 11:30:17.891370 4789 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891378 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891385 4789 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891393 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891401 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891408 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891416 4789 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891423 4789 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891430 4789 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891438 4789 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891446 4789 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891485 4789 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891497 4789 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891508 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891518 4789 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891531 4789 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
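[analysis] The W1124 ... feature_gate.go:330 warnings above come from OpenShift-specific gate names that the upstream kubelet's gate registry does not recognize; the same list is re-logged each time the gate set is parsed during startup, which is why near-identical warning runs recur below. A sketch, under the same hypothetical kubelet.log assumption, that tallies how often each unknown gate is reported:

    import re
    from collections import Counter

    UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

    with open("kubelet.log") as f:   # hypothetical capture of this journal
        counts = Counter(UNRECOGNIZED.findall(f.read()))

    # Each distinct gate (AdminNetworkPolicy, GatewayAPI, ...) appears once per
    # parsing pass, so roughly equal counts across gates indicate repeated
    # passes rather than distinct problems.
    for gate, n in counts.most_common(10):
        print(f"{n:3d}  {gate}")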
Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891541 4789 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891550 4789 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891559 4789 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891569 4789 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891577 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891586 4789 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891593 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891603 4789 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891611 4789 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.891619 4789 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892733 4789 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892766 4789 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892790 4789 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892807 4789 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892821 4789 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892834 4789 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892850 4789 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892864 4789 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892875 4789 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892886 4789 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892900 4789 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892913 4789 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892926 4789 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892938 4789 flags.go:64] FLAG: --cgroup-root="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892950 4789 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892962 4789 flags.go:64] FLAG: --client-ca-file="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892974 4789 flags.go:64] FLAG: --cloud-config="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.892985 4789 flags.go:64] FLAG: --cloud-provider="" Nov 24 11:30:17 crc 
kubenswrapper[4789]: I1124 11:30:17.892996 4789 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893011 4789 flags.go:64] FLAG: --cluster-domain="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893023 4789 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893034 4789 flags.go:64] FLAG: --config-dir="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893046 4789 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893059 4789 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893074 4789 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893086 4789 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893098 4789 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893110 4789 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893127 4789 flags.go:64] FLAG: --contention-profiling="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893139 4789 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893154 4789 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893168 4789 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893179 4789 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893194 4789 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893207 4789 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893219 4789 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893233 4789 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893246 4789 flags.go:64] FLAG: --enable-server="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893259 4789 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893276 4789 flags.go:64] FLAG: --event-burst="100" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893289 4789 flags.go:64] FLAG: --event-qps="50" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893301 4789 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893312 4789 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893323 4789 flags.go:64] FLAG: --eviction-hard="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893337 4789 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893392 4789 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893404 4789 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893414 4789 flags.go:64] FLAG: --eviction-soft="" Nov 24 
11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893423 4789 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893432 4789 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893442 4789 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893451 4789 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893492 4789 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893501 4789 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893510 4789 flags.go:64] FLAG: --feature-gates="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893521 4789 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893530 4789 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893541 4789 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893549 4789 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893559 4789 flags.go:64] FLAG: --healthz-port="10248" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893568 4789 flags.go:64] FLAG: --help="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893577 4789 flags.go:64] FLAG: --hostname-override="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893587 4789 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893596 4789 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893605 4789 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893615 4789 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893626 4789 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893635 4789 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893644 4789 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893653 4789 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893662 4789 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893671 4789 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893682 4789 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893690 4789 flags.go:64] FLAG: --kube-reserved="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893699 4789 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893708 4789 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893717 4789 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893726 4789 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 11:30:17 crc 
kubenswrapper[4789]: I1124 11:30:17.893735 4789 flags.go:64] FLAG: --lock-file="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893744 4789 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893753 4789 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893762 4789 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893775 4789 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893784 4789 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893793 4789 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893802 4789 flags.go:64] FLAG: --logging-format="text" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893811 4789 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893820 4789 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893830 4789 flags.go:64] FLAG: --manifest-url="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893839 4789 flags.go:64] FLAG: --manifest-url-header="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893851 4789 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893860 4789 flags.go:64] FLAG: --max-open-files="1000000" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893871 4789 flags.go:64] FLAG: --max-pods="110" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893881 4789 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893890 4789 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893899 4789 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893908 4789 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893918 4789 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893926 4789 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893936 4789 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893956 4789 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893965 4789 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893975 4789 flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893984 4789 flags.go:64] FLAG: --pod-cidr="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.893994 4789 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894007 4789 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894016 4789 flags.go:64] FLAG: 
--pod-max-pids="-1" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894025 4789 flags.go:64] FLAG: --pods-per-core="0" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894034 4789 flags.go:64] FLAG: --port="10250" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894044 4789 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894052 4789 flags.go:64] FLAG: --provider-id="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894061 4789 flags.go:64] FLAG: --qos-reserved="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894070 4789 flags.go:64] FLAG: --read-only-port="10255" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894078 4789 flags.go:64] FLAG: --register-node="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894088 4789 flags.go:64] FLAG: --register-schedulable="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894097 4789 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894112 4789 flags.go:64] FLAG: --registry-burst="10" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894121 4789 flags.go:64] FLAG: --registry-qps="5" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894129 4789 flags.go:64] FLAG: --reserved-cpus="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894138 4789 flags.go:64] FLAG: --reserved-memory="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894149 4789 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894158 4789 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894167 4789 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894176 4789 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894185 4789 flags.go:64] FLAG: --runonce="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894194 4789 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894203 4789 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894213 4789 flags.go:64] FLAG: --seccomp-default="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894222 4789 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894231 4789 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894240 4789 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894250 4789 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894260 4789 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894268 4789 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894277 4789 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894286 4789 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894295 4789 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 24 11:30:17 crc 
kubenswrapper[4789]: I1124 11:30:17.894305 4789 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894315 4789 flags.go:64] FLAG: --system-cgroups="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894324 4789 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894340 4789 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894349 4789 flags.go:64] FLAG: --tls-cert-file="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894357 4789 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894368 4789 flags.go:64] FLAG: --tls-min-version="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894376 4789 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894385 4789 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894394 4789 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894403 4789 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894413 4789 flags.go:64] FLAG: --v="2" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894424 4789 flags.go:64] FLAG: --version="false" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894436 4789 flags.go:64] FLAG: --vmodule="" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894479 4789 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.894493 4789 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894716 4789 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894727 4789 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894737 4789 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894745 4789 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894753 4789 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894761 4789 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894770 4789 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894778 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894786 4789 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894794 4789 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894801 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894812 4789 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894821 4789 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894830 4789 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894838 4789 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894845 4789 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894853 4789 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894861 4789 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894869 4789 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894879 4789 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894889 4789 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894898 4789 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894907 4789 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894916 4789 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894924 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894938 4789 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894946 4789 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894954 4789 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894962 4789 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894970 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894978 4789 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894985 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.894993 4789 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895000 4789 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895008 4789 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895016 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895023 4789 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895031 4789 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895041 4789 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895050 4789 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895057 4789 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895065 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895072 4789 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895079 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895087 4789 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895095 4789 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895102 4789 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895110 4789 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895117 4789 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895125 4789 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895134 4789 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895141 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895149 4789 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895156 4789 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895164 4789 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895174 4789 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895183 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895196 4789 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895203 4789 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895211 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895219 4789 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895229 4789 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
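[analysis] The flags.go:64 block further above records every effective command-line value, defaults included, one "FLAG: --name=\"value\"" entry per flag. A small parser, same kubelet.log assumption, that folds that dump into a dict, e.g. for diffing the configuration against another node:

    import re

    FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

    with open("kubelet.log") as f:   # hypothetical capture of this journal
        flags = dict(FLAG.findall(f.read()))

    print(flags["--config"])            # /etc/kubernetes/kubelet.conf
    print(flags["--node-ip"])           # 192.168.126.11
    print(flags["--system-reserved"])   # cpu=200m,ephemeral-storage=350Mi,memory=350Mi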
Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895239 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895247 4789 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895255 4789 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895263 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895271 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895278 4789 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895286 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895294 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.895309 4789 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.895334 4789 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.907757 4789 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.907801 4789 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907916 4789 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907927 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907937 4789 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907945 4789 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907955 4789 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907963 4789 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907971 4789 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907980 4789 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907988 4789 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.907996 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908005 4789 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiVCenters Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908013 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908021 4789 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908029 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908037 4789 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908044 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908053 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908061 4789 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908068 4789 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908076 4789 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908084 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908092 4789 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908099 4789 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908107 4789 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908115 4789 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908123 4789 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908131 4789 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908140 4789 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908148 4789 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908158 4789 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908169 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908179 4789 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908187 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908196 4789 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908204 4789 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908212 4789 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908223 4789 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908234 4789 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908244 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908253 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908261 4789 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908269 4789 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908277 4789 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908285 4789 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908292 4789 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908300 4789 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908308 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908315 4789 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908323 4789 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908331 4789 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908339 4789 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908346 4789 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908354 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908362 4789 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908370 4789 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:30:17 crc 
kubenswrapper[4789]: W1124 11:30:17.908378 4789 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908388 4789 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908398 4789 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908406 4789 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908416 4789 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908424 4789 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908432 4789 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908439 4789 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908448 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908490 4789 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908501 4789 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908511 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908520 4789 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908527 4789 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908535 4789 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908544 4789 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.908558 4789 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908820 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908833 4789 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908844 4789 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908852 4789 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908860 4789 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908868 4789 
feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908876 4789 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908885 4789 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908894 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908901 4789 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908909 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908918 4789 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908925 4789 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908934 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908941 4789 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908949 4789 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908957 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908967 4789 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908976 4789 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908985 4789 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.908993 4789 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909000 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909008 4789 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909016 4789 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909024 4789 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909032 4789 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909039 4789 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909047 4789 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909055 4789 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909064 4789 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909072 4789 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 
11:30:17.909080 4789 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909088 4789 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909096 4789 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909106 4789 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909113 4789 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909121 4789 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909129 4789 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909136 4789 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909144 4789 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909152 4789 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909159 4789 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909167 4789 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909175 4789 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909185 4789 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909195 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909204 4789 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909213 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909220 4789 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909229 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909236 4789 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909243 4789 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909254 4789 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
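[analysis] The I1124 ... feature_gate.go:386 summary lines print the resolved gate set in Go's map syntax. A sketch for turning that one-line dump into a Python dict of booleans (the input string below is abridged from the log's own map):

    import re

    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "NodeSwap:false ValidatingAdmissionPolicy:true]}")  # abridged from the log

    gates = {name: val == "true"
             for name, val in re.findall(r"(\w+):(true|false)", line)}

    assert gates["KMSv1"] and not gates["NodeSwap"]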
Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909263 4789 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909273 4789 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909281 4789 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909289 4789 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909297 4789 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909305 4789 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909313 4789 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909324 4789 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909334 4789 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909342 4789 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909350 4789 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909358 4789 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909366 4789 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909374 4789 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909381 4789 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909389 4789 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909397 4789 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:30:17 crc kubenswrapper[4789]: W1124 11:30:17.909406 4789 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.909417 4789 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.910641 4789 server.go:940] "Client rotation is on, will bootstrap in background" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.916769 4789 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.916913 4789 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
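[analysis] The certificate_manager.go lines below are internally consistent: the rotation deadline of 2026-01-13 20:59:45 UTC lies 1209h29m27s after the log's own timestamp of 2025-11-24 11:30:17 UTC, matching the "Waiting 1209h29m27.350369164s" entry. A quick check with the two timestamps quoted in the log:

    from datetime import datetime, timezone

    logged_at = datetime(2025, 11, 24, 11, 30, 17, 924377, tzinfo=timezone.utc)
    deadline  = datetime(2026, 1, 13, 20, 59, 45, 274743, tzinfo=timezone.utc)

    wait = deadline - logged_at
    print(wait)                         # 50 days, 9:29:27.350366
    print(wait.total_seconds() / 3600)  # ~1209.49 hours, i.e. 1209h29m

    # The few-microsecond gap versus Go's 27.350369164s is expected: the
    # kubelet computed the wait just before emitting the log line.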
Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.924091 4789 server.go:997] "Starting client certificate rotation" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.924162 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.924309 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-13 20:59:45.274743101 +0000 UTC Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.924377 4789 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1209h29m27.350369164s for next certificate rotation Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.957628 4789 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.961070 4789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:30:17 crc kubenswrapper[4789]: I1124 11:30:17.983953 4789 log.go:25] "Validated CRI v1 runtime API" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.020868 4789 log.go:25] "Validated CRI v1 image API" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.023006 4789 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.031635 4789 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-11-24-36-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.031694 4789 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.053352 4789 manager.go:217] Machine: {Timestamp:2025-11-24 11:30:18.050348292 +0000 UTC m=+0.632819751 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:48941845-60e3-4de0-ba49-51eec51285bb BootID:4376b485-9285-482b-9f4e-acdea532ff82 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 
DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4d:f7:d7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4d:f7:d7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c8:fc:72 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8e:b2:2e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:43:46:86 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:79:45:83 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:be:51:c0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f6:56:25:64:f5:06 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:18:61:4f:e2:5b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.053738 4789 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
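[editor's note] The certificate rotation deadlines logged above are not fixed fractions of the lifetime: the kubelet's certificate manager picks a jittered point at roughly 70-90% of each certificate's validity window, which is why the client certificate (deadline 2026-01-13) and the kubelet-serving certificate a little further below (deadline 2025-12-08) land at different fractions of what appear to be one-year windows. A sketch of that policy; the NotBefore value is an assumption, since the log prints only the expiration:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mirrors the certificate manager's policy of rotating at
// a jittered ~70-90% of the certificate lifetime (an approximation of
// certificate_manager.go, not the exact upstream code).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Assumed issuance: the log shows expiration 2026-02-24 05:52:08 UTC;
	// a one-year lifetime is a guess for illustration.
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	now := time.Date(2025, 11, 24, 11, 30, 17, 0, time.UTC) // log timestamp
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Printf("rotation deadline %s, waiting %s\n", deadline, deadline.Sub(now).Truncate(time.Second))
}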
Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.053980 4789 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.057529 4789 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.058428 4789 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.058541 4789 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.058916 4789 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.058936 4789 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.059667 4789 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.059725 4789 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.060449 4789 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.060629 4789 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.065255 4789 kubelet.go:418] "Attempting to sync node with API server" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.065298 4789 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
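[editor's note] The NodeConfig dump above carries the node's hard eviction policy: pressure is declared when a signal falls below either an absolute quantity (memory.available < 100Mi) or a percentage of capacity (nodefs.available < 10%, imagefs.available < 15%, the inodesFree signals < 5%). A small illustrative Go sketch of evaluating one such threshold, using the machine's 25199480832-byte memory capacity and 85292941312-byte /var filesystem from the log above; the type and function names are hypothetical:

package main

import "fmt"

// threshold models one hard eviction rule from the NodeConfig above:
// either a percentage of capacity or an absolute quantity in bytes.
type threshold struct {
	signal   string
	percent  float64 // used when quantity == 0
	quantity int64   // bytes; e.g. 100Mi for memory.available
}

// underPressure reports whether the observed available amount has fallen
// below the threshold, given the resource's total capacity.
func underPressure(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percent: 0.10}
	fmt.Println(underPressure(memory, 64<<20, 25199480832)) // true: 64Mi < 100Mi
	fmt.Println(underPressure(nodefs, 20<<30, 85292941312)) // false: ~25% still free
}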
Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.065340 4789 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.065362 4789 kubelet.go:324] "Adding apiserver pod source" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.065380 4789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.070586 4789 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.071765 4789 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.075176 4789 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.075314 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.075412 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.075449 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.075580 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077110 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077154 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077170 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077184 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077205 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077219 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077232 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077280 4789 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077296 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077309 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077357 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.077375 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.078670 4789 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.079998 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.080070 4789 server.go:1280] "Started kubelet" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.081108 4789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.081691 4789 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.081927 4789 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 11:30:18 crc systemd[1]: Started Kubernetes Kubelet. Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.082781 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.082861 4789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.082994 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:45:34.932949365 +0000 UTC Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.083044 4789 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 341h15m16.849909314s for next certificate rotation Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.083101 4789 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.083115 4789 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.083129 4789 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.086685 4789 server.go:460] "Adding debug handlers to kubelet server" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.088059 4789 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.088081 4789 factory.go:55] Registering systemd factory Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.088094 4789 factory.go:221] Registration of the systemd container factory successfully Nov 24 11:30:18 
crc kubenswrapper[4789]: W1124 11:30:18.087951 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.088131 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.088190 4789 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.088956 4789 factory.go:153] Registering CRI-O factory Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.088973 4789 factory.go:221] Registration of the crio container factory successfully Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.088998 4789 factory.go:103] Registering Raw factory Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.089018 4789 manager.go:1196] Started watching for new ooms in manager Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.089585 4789 manager.go:319] Starting recovery of all containers Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.090414 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="200ms" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.089589 4789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aedefc22a31c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 11:30:18.079482307 +0000 UTC m=+0.661953726,LastTimestamp:2025-11-24 11:30:18.079482307 +0000 UTC m=+0.661953726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121299 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121377 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121439 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121453 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121506 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121520 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121535 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121547 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121562 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121573 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121585 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121599 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121612 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121628 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121647 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121663 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121677 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121708 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121726 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121740 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121755 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121769 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121783 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121797 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121812 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121824 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121841 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121863 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121877 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121891 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121906 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121921 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121935 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121969 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121983 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.121995 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.122006 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.122020 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.122042 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.122058 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.122071 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.122082 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.124966 4789 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125010 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125030 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125044 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125056 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125070 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125082 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125095 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125110 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125126 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125140 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125157 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125170 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125185 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125199 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125211 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125222 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125234 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125245 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125257 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125268 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125282 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125301 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125313 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125326 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125339 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125352 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125365 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125377 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125392 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125442 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125502 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125519 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125530 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125543 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125556 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125568 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125581 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125599 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125612 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125625 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125639 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125652 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125664 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125676 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125687 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125698 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125710 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125723 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125735 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125746 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125779 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125793 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125806 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125819 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125832 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125849 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125862 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125873 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125885 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125899 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125912 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125924 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125942 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125955 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125967 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.125981 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126001 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126014 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126029 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126042 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126055 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126069 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126083 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126097 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126108 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126120 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126132 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126147 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126159 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126173 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126186 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126198 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126218 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126236 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126248 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126264 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126276 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126288 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126301 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126313 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126326 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126340 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126354 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126366 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126378 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126391 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126404 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126417 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126431 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126443 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126475 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126489 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126501 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126514 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126527 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126539 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126551 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126563 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126576 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126589 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126601 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126613 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126626 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126644 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126658 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126670 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126684 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126697 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126710 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126723 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126735 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126748 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126761 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126774 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126787 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126801 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126817 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126830 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126842 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126855 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126866 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126880 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126897 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126910 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126923 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126936 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126950 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126968 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126981 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.126994 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127006 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127017 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127029 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127043 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127059 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127071 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127087 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127100 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127114 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127127 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127141 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127155 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127167 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127188 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127200 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127213 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127224 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127238 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127250 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127263 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127276 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127290 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127301 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127315 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127327 4789 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127339 4789 reconstruct.go:97] "Volume reconstruction finished" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.127348 4789 reconciler.go:26] "Reconciler: start to sync state" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.140414 4789 manager.go:324] Recovery completed Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.155060 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.156813 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.156862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.156878 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.157796 4789 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 
11:30:18.157822 4789 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.157850 4789 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.165413 4789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.167883 4789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.167920 4789 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.167952 4789 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.167993 4789 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.168805 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.169006 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.176381 4789 policy_none.go:49] "None policy: Start" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.177122 4789 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.177146 4789 state_mem.go:35] "Initializing new in-memory state store" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.183513 4789 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.232302 4789 manager.go:334] "Starting Device Plugin manager" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.232348 4789 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.232362 4789 server.go:79] "Starting device plugin registration server" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.232779 4789 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.232796 4789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.233380 4789 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.233559 4789 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.233584 4789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.240142 4789 eviction_manager.go:285] "Eviction manager: failed to get summary 
stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.268438 4789 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.268582 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.269917 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.269958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.269968 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.270072 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.270327 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.270407 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.270768 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.270805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.270817 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.270958 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.271191 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.272037 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.272308 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.272370 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.272412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274043 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274069 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274367 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274428 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274510 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274534 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274767 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.274821 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.276056 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.276134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.276172 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.276183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.276223 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.276241 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.276511 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.277856 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.277931 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.278450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.278536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.278650 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.278846 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.278893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.278907 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.278950 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.278995 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.280345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.280390 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.280408 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.302617 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="400ms" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330154 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330209 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330239 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330283 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330347 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330384 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330428 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330546 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330590 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330629 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330651 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330697 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330746 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.330782 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.333226 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc 
kubenswrapper[4789]: I1124 11:30:18.334376 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.334413 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.334426 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.334504 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.335079 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432512 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432619 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432665 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432690 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432829 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432878 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432899 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 
11:30:18.432845 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432837 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.432716 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433075 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433105 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433184 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433133 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433220 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433254 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433350 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433374 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433390 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433397 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433303 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433421 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433498 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433519 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433308 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433564 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433576 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 
11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433601 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433647 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.433682 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.535242 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.537225 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.537287 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.537304 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.537341 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.537839 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.613990 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.639795 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.665368 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.676897 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.680244 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.692694 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2c7a1436a79b125aead033a980c653c9103c24ba4be4f4d2243bb18f5711755a WatchSource:0}: Error finding container 2c7a1436a79b125aead033a980c653c9103c24ba4be4f4d2243bb18f5711755a: Status 404 returned error can't find the container with id 2c7a1436a79b125aead033a980c653c9103c24ba4be4f4d2243bb18f5711755a Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.695865 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-87e569f528d6cb13a80e6ca94c5059678b72226311d2eec4d6bff71359e3c831 WatchSource:0}: Error finding container 87e569f528d6cb13a80e6ca94c5059678b72226311d2eec4d6bff71359e3c831: Status 404 returned error can't find the container with id 87e569f528d6cb13a80e6ca94c5059678b72226311d2eec4d6bff71359e3c831 Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.704318 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="800ms" Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.706845 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2af130505624c2c14d2628efb1b8565215bd7a08db7ab996a621041ceedd2432 WatchSource:0}: Error finding container 2af130505624c2c14d2628efb1b8565215bd7a08db7ab996a621041ceedd2432: Status 404 returned error can't find the container with id 2af130505624c2c14d2628efb1b8565215bd7a08db7ab996a621041ceedd2432 Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.716412 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f68a244ebaf7fe2f5dc3ed618563aa11118bf1280d1955ffdd9c3666e9964d97 WatchSource:0}: Error finding container f68a244ebaf7fe2f5dc3ed618563aa11118bf1280d1955ffdd9c3666e9964d97: Status 404 returned error can't find the container with id f68a244ebaf7fe2f5dc3ed618563aa11118bf1280d1955ffdd9c3666e9964d97 Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.718365 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f6b9e3e38889fca49de6775266f048c729f127447228be273629e98d0090a8ec WatchSource:0}: Error finding container f6b9e3e38889fca49de6775266f048c729f127447228be273629e98d0090a8ec: Status 404 returned error can't find the container with id f6b9e3e38889fca49de6775266f048c729f127447228be273629e98d0090a8ec Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.938614 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.940045 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.940192 4789 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.940254 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:18 crc kubenswrapper[4789]: I1124 11:30:18.940335 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.941292 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Nov 24 11:30:18 crc kubenswrapper[4789]: W1124 11:30:18.958063 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:18 crc kubenswrapper[4789]: E1124 11:30:18.958150 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.080846 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.173288 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f68a244ebaf7fe2f5dc3ed618563aa11118bf1280d1955ffdd9c3666e9964d97"} Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.174508 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2af130505624c2c14d2628efb1b8565215bd7a08db7ab996a621041ceedd2432"} Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.175339 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"87e569f528d6cb13a80e6ca94c5059678b72226311d2eec4d6bff71359e3c831"} Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.176628 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2c7a1436a79b125aead033a980c653c9103c24ba4be4f4d2243bb18f5711755a"} Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.177553 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6b9e3e38889fca49de6775266f048c729f127447228be273629e98d0090a8ec"} Nov 24 11:30:19 crc kubenswrapper[4789]: W1124 11:30:19.333168 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial 
tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:19 crc kubenswrapper[4789]: E1124 11:30:19.333546 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:19 crc kubenswrapper[4789]: W1124 11:30:19.358042 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:19 crc kubenswrapper[4789]: E1124 11:30:19.358105 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:19 crc kubenswrapper[4789]: E1124 11:30:19.505683 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="1.6s" Nov 24 11:30:19 crc kubenswrapper[4789]: W1124 11:30:19.739825 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:19 crc kubenswrapper[4789]: E1124 11:30:19.739904 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.742181 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.743703 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.743733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.743741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:19 crc kubenswrapper[4789]: I1124 11:30:19.743762 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:30:19 crc kubenswrapper[4789]: E1124 11:30:19.744132 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.081608 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.183621 4789 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528" exitCode=0 Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.183716 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528"} Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.183771 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.185554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.185600 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.185616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.188819 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054" exitCode=0 Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.188988 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054"} Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.189004 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.190381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.190433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.190485 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.191707 4789 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d4f6f60ca84abcd292bb2b9a6ebe47edbc35a78c6e2dca16a18964f11bbb9f80" exitCode=0 Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.191827 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.191831 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d4f6f60ca84abcd292bb2b9a6ebe47edbc35a78c6e2dca16a18964f11bbb9f80"} Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.193030 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:20 crc 
kubenswrapper[4789]: I1124 11:30:20.193626 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.193687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.193704 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.194433 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.194538 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.194562 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.196032 4789 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5" exitCode=0 Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.196106 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.196118 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5"} Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.197272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.197308 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.197321 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.201650 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b"} Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.201688 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983"} Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.201707 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11"} Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.201724 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19"} Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.201712 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.202871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.202903 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:20 crc kubenswrapper[4789]: I1124 11:30:20.202920 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.081347 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:21 crc kubenswrapper[4789]: E1124 11:30:21.106708 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.184:6443: connect: connection refused" interval="3.2s" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.206332 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.206372 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.206383 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.206484 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.207159 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.207180 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.207187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.209952 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1fd6d65d4251753aa6ff29e27cd70770dc5f08eb51cc717f789e65ac4a3ac7ba"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.209977 4789 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.209988 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.209997 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.210006 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.210104 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.210716 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.210774 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.210787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.212623 4789 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f94b7977ee2f28707dc7504b48ff515227ff36acbe11dd5d3bdcd0ed57aeedbe" exitCode=0 Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.212703 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f94b7977ee2f28707dc7504b48ff515227ff36acbe11dd5d3bdcd0ed57aeedbe"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.212750 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.213490 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.213522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.213535 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.214114 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"edb7c8772394f7e4e2a72f2f354cf4b45d4e4ec2c5897c415583c26012e4508e"} Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.214153 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.214158 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.214712 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.214733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.214741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.215131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.215151 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.215159 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:21 crc kubenswrapper[4789]: E1124 11:30:21.280217 4789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.184:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aedefc22a31c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 11:30:18.079482307 +0000 UTC m=+0.661953726,LastTimestamp:2025-11-24 11:30:18.079482307 +0000 UTC m=+0.661953726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.344342 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.345887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.345923 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.345934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:21 crc kubenswrapper[4789]: I1124 11:30:21.345961 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:30:21 crc kubenswrapper[4789]: E1124 11:30:21.346364 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.184:6443: connect: connection refused" node="crc" Nov 24 11:30:21 crc kubenswrapper[4789]: W1124 11:30:21.371779 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:21 crc kubenswrapper[4789]: E1124 11:30:21.371851 4789 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:21 crc kubenswrapper[4789]: W1124 11:30:21.583138 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.184:6443: connect: connection refused Nov 24 11:30:21 crc kubenswrapper[4789]: E1124 11:30:21.583223 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.184:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.219507 4789 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3e256133e9f03ec292381deed3a0d6fd1fd7af957c68de4e8e697c2554749e7e" exitCode=0 Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.219629 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.219648 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.219771 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.219858 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220174 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3e256133e9f03ec292381deed3a0d6fd1fd7af957c68de4e8e697c2554749e7e"} Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220202 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220642 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220756 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220789 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220799 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220843 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.220854 4789 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.221227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.221268 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.221286 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.221296 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.221272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.221326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.598587 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.939223 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.939419 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.940887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.940935 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.940954 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:22 crc kubenswrapper[4789]: I1124 11:30:22.947069 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.225445 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33efc03995365a46065fead4cf9beaa906960b1ab9bcdd94ced09fd679f5890c"} Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.225511 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6334f038ee8252e0bb02bf12486569bfbc82a60847ec3ec6226a64ad1246ac0e"} Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.225522 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.225526 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"628bfc626ee18c244b923796552282508faa38433f97381f38198a988b51e3b2"} Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.225581 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.225595 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"02a082848a141aadc260aa318616174c3b23a37baabb32398a925ffff66ef0b8"} Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.226383 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.226412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.226421 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.226447 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.226496 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.226511 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.611632 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.611852 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.613516 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.613590 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.613601 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:23 crc kubenswrapper[4789]: I1124 11:30:23.934402 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.197215 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.236163 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.236857 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.237297 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d6de8128fc805cee430c87cf58589e199f4865449e9c5dd9c5575a334992cfb5"} Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.237376 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.238832 4789 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.238884 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.238930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.238948 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.238890 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.239397 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.238854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.239605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.239650 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.547223 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.549186 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.549243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.549260 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:24 crc kubenswrapper[4789]: I1124 11:30:24.549299 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.198229 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.239383 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.239436 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.240585 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.241180 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.241243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.241268 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.241706 4789 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.241950 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.242015 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.242057 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.242161 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:25 crc kubenswrapper[4789]: I1124 11:30:25.242120 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:26 crc kubenswrapper[4789]: I1124 11:30:26.257223 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:26 crc kubenswrapper[4789]: I1124 11:30:26.257422 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:26 crc kubenswrapper[4789]: I1124 11:30:26.258941 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:26 crc kubenswrapper[4789]: I1124 11:30:26.259001 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:26 crc kubenswrapper[4789]: I1124 11:30:26.259024 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:27 crc kubenswrapper[4789]: I1124 11:30:27.463978 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 11:30:27 crc kubenswrapper[4789]: I1124 11:30:27.464231 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:27 crc kubenswrapper[4789]: I1124 11:30:27.466096 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:27 crc kubenswrapper[4789]: I1124 11:30:27.466182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:27 crc kubenswrapper[4789]: I1124 11:30:27.466209 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:28 crc kubenswrapper[4789]: E1124 11:30:28.240339 4789 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:30:29 crc kubenswrapper[4789]: I1124 11:30:29.257898 4789 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:30:29 crc kubenswrapper[4789]: I1124 11:30:29.257996 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:30:31 crc kubenswrapper[4789]: W1124 11:30:31.838689 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:30:31 crc kubenswrapper[4789]: I1124 11:30:31.839307 4789 trace.go:236] Trace[1182231246]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:30:21.837) (total time: 10001ms): Nov 24 11:30:31 crc kubenswrapper[4789]: Trace[1182231246]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:30:31.838) Nov 24 11:30:31 crc kubenswrapper[4789]: Trace[1182231246]: [10.001551706s] [10.001551706s] END Nov 24 11:30:31 crc kubenswrapper[4789]: E1124 11:30:31.839347 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.081953 4789 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:30:32 crc kubenswrapper[4789]: W1124 11:30:32.099760 4789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.099847 4789 trace.go:236] Trace[121231717]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:30:22.098) (total time: 10001ms): Nov 24 11:30:32 crc kubenswrapper[4789]: Trace[121231717]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:30:32.099) Nov 24 11:30:32 crc kubenswrapper[4789]: Trace[121231717]: [10.0011453s] [10.0011453s] END Nov 24 11:30:32 crc kubenswrapper[4789]: E1124 11:30:32.099871 4789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.260486 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.262089 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1fd6d65d4251753aa6ff29e27cd70770dc5f08eb51cc717f789e65ac4a3ac7ba" exitCode=255 Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.262162 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1fd6d65d4251753aa6ff29e27cd70770dc5f08eb51cc717f789e65ac4a3ac7ba"} Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.262363 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.263292 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.263321 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.263333 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.263937 4789 scope.go:117] "RemoveContainer" containerID="1fd6d65d4251753aa6ff29e27cd70770dc5f08eb51cc717f789e65ac4a3ac7ba" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.436370 4789 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.436444 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.444535 4789 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:30:32 crc kubenswrapper[4789]: I1124 11:30:32.444972 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.089609 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.089864 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.091606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.091670 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.091681 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.138651 4789 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.267741 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.270012 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.271048 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004"} Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.271354 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.272629 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.272679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.272698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.274170 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.274209 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.274222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.289343 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.939601 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.939739 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.940741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.940767 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:33 crc kubenswrapper[4789]: I1124 11:30:33.940777 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:34 crc kubenswrapper[4789]: I1124 11:30:34.272218 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:30:34 crc kubenswrapper[4789]: I1124 11:30:34.273147 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:34 crc kubenswrapper[4789]: I1124 11:30:34.273240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Nov 24 11:30:34 crc kubenswrapper[4789]: I1124 11:30:34.273264 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.203839 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.203974 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.204174 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.204938 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.204977 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.204994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.209807 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.274322 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.278192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.278415 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.278666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:30:35 crc kubenswrapper[4789]: I1124 11:30:35.490609 4789 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 24 11:30:36 crc kubenswrapper[4789]: I1124 11:30:36.277039 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:30:36 crc kubenswrapper[4789]: I1124 11:30:36.278337 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:30:36 crc kubenswrapper[4789]: I1124 11:30:36.278398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:30:36 crc kubenswrapper[4789]: I1124 11:30:36.278419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:30:36 crc kubenswrapper[4789]: I1124 11:30:36.530122 4789 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 24 11:30:37 crc kubenswrapper[4789]: E1124 11:30:37.449220 4789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Nov 24 11:30:37 crc kubenswrapper[4789]: I1124 11:30:37.454287 4789 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 24 11:30:37 crc kubenswrapper[4789]: I1124 11:30:37.455043 4789 trace.go:236] Trace[1711264922]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:30:25.357) (total time: 12097ms):
Nov 24 11:30:37 crc kubenswrapper[4789]: Trace[1711264922]: ---"Objects listed" error: 12097ms (11:30:37.454)
Nov 24 11:30:37 crc kubenswrapper[4789]: Trace[1711264922]: [12.097773038s] [12.097773038s] END
Nov 24 11:30:37 crc kubenswrapper[4789]: I1124 11:30:37.455077 4789 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 24 11:30:37 crc kubenswrapper[4789]: I1124 11:30:37.455214 4789 trace.go:236] Trace[792239481]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:30:24.926) (total time: 12528ms):
Nov 24 11:30:37 crc kubenswrapper[4789]: Trace[792239481]: ---"Objects listed" error: 12528ms (11:30:37.455)
Nov 24 11:30:37 crc kubenswrapper[4789]: Trace[792239481]: [12.528618451s] [12.528618451s] END
Nov 24 11:30:37 crc kubenswrapper[4789]: I1124 11:30:37.455252 4789 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 24 11:30:37 crc kubenswrapper[4789]: E1124 11:30:37.459748 4789 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Nov 24 11:30:37 crc kubenswrapper[4789]: I1124 11:30:37.508818 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 11:30:37 crc kubenswrapper[4789]: I1124 11:30:37.514287 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.075499 4789 apiserver.go:52] "Watching apiserver"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.078783 4789 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.079150 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.079697 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.079799 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.080075 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.080175 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.080417 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.080526 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.080753 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.080807 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.080747 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.084150 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.084642 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.084889 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.085884 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.086087 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.086344 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.086440 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.086544 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.086521 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.089496 4789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.126599 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.143261 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.157780 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.158938 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159087 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159223 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159319 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159417 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159580 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159682 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159792 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159903 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159438 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159992 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159620 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159677 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159839 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.159924 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160285 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160424 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160529 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160571 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160678 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160757 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160822 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160915 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160920 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.160981 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161003 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161021 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161040 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161269 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161170 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161233 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161273 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161294 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161392 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161419 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161447 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161493 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161514 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161540 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161565 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161585 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161607 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161615 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161628 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161653 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161677 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161673 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161727 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161716 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161750 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161753 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161877 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161900 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161925 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161945 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161971 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162010 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162033 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162054 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162072 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162089 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162107 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162127 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162146 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162168 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162185 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162204 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162222 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162242 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162362 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162388 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162405 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162425 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162469 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162489 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162509 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162529 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162550 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162572 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162596 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162619 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162639 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162661 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162685 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162711 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162733 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162760 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162777 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162797 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162817 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162837 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162854 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162900 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162916 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162933 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162957 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162975 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162991 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163013 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163031 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163098 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161942 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.161967 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162228 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162306 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162378 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162499 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162544 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162590 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162674 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162674 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162779 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162825 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162834 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162939 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162994 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.162991 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163115 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.163119 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:30:38.663099186 +0000 UTC m=+21.245570555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166716 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166750 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166777 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166801 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166824 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166847 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166867 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166890 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166910 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName:
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166932 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166952 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166971 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166991 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167009 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167028 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167051 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167080 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167105 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167129 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167146 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167167 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167189 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167205 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167221 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167237 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167255 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167272 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167289 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167306 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167324 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167342 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167360 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167379 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167400 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167416 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167432 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167555 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167577 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167597 4789 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167615 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167632 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167651 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167711 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167731 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167749 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167768 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167788 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167808 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167827 4789 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167843 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167862 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167886 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167907 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167926 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167982 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168008 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168028 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168044 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168063 4789 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168090 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168117 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168156 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168182 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168208 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168230 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168255 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168279 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168301 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 
11:30:38.168322 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168340 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168359 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168427 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168449 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168503 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168527 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168547 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168567 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168585 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168604 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168624 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168643 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168665 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168684 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168704 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168726 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168745 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168763 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168784 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168801 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168822 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168840 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168858 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168879 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168930 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168951 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168968 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168987 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169006 4789 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169026 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169044 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169063 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169080 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169104 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169122 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169142 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169162 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169181 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 
11:30:38.169199 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169218 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169235 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169257 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169275 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169331 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169366 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169384 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169409 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169430 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169450 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169483 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169507 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169527 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169551 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169571 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169590 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169610 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169631 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169711 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169726 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169737 4789 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169747 4789 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169765 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169775 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169785 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169797 4789 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169816 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169830 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169875 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: 
I1124 11:30:38.169888 4789 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169900 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169914 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169925 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169937 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169950 4789 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169962 4789 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.167742 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.168854 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163266 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163279 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163337 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.187777 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163448 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.164421 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.164881 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.165102 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.165195 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.165303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.165585 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.165657 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.165723 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.165806 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.165948 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166162 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166281 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.166333 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169145 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169219 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169482 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.169769 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.170201 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.170546 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.170577 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.170832 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.170919 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.170922 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.171686 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.171883 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.172750 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.172984 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.173405 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.173531 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.173654 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.173973 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.174084 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.174317 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.174728 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.174798 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.175777 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.176104 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.176161 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.176528 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.176636 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.176732 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.176619 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.177348 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.177399 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.177721 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.178498 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.179231 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.179496 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.179544 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.179854 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.180203 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.180383 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.180601 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.181099 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.181317 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.181447 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.181841 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.182089 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.182260 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.182420 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.182757 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.182821 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.182969 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.182981 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.183332 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.183832 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.183953 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.184075 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.184220 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.184636 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.184637 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.184721 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.185422 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.186153 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.186706 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.187039 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.187448 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.187687 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.163144 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.188283 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.188339 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.188518 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.188804 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.188827 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.188991 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189084 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189097 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189145 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189313 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189410 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189698 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189917 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189989 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190013 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190127 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190149 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190175 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190335 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190609 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190878 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.185936 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190911 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190970 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.191050 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.191286 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.191354 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.191529 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.191719 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.191784 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.191806 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.192062 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.192075 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.192229 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189755 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.192549 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.195693 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.195923 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.195941 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.195938 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.196141 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.196171 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.196451 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.196679 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.197031 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.197108 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.193149 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.197487 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.189898 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.198104 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.198355 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.198812 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.199059 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.199142 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.190277 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.199970 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.200384 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.200576 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.200640 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.200570 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.200772 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.200826 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.200866 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.191856 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.201062 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.201364 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.201237 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.201792 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.201883 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.202016 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:38.701993929 +0000 UTC m=+21.284465378 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.201425 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.202369 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.202369 4789 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.202383 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:38.702370358 +0000 UTC m=+21.284841817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.201543 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.202724 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.203854 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.203962 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:38.703935575 +0000 UTC m=+21.286406954 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.204059 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.204368 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.214995 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.219531 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.222763 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.223238 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.223808 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.225604 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.227416 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.227565 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.229447 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.229572 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.229633 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.229654 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.231385 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:38.729709461 +0000 UTC m=+21.312181020 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.233298 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.234724 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.236142 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.237165 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.238913 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.238895 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.239417 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.242209 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.242895 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.244097 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.247014 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.251170 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.251526 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.253915 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.256095 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.256507 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.257982 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.258745 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.260424 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.261685 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.262447 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.264031 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.265045 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.267030 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.267102 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.267752 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.269195 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.269896 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.270615 4789 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271282 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271312 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271359 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271425 4789 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271445 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271475 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271490 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271501 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271514 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271523 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271522 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271534 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271581 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271585 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271613 4789 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271627 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271642 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271657 4789 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271669 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271680 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271691 4789 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271703 4789 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271716 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271726 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271735 4789 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271745 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271754 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271763 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271772 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271782 4789 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271793 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271768 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.271802 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273143 4789 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273169 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273182 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273192 4789 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273202 4789 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273210 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273219 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273228 4789 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273239 4789 
reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273249 4789 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273270 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273286 4789 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273299 4789 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273308 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273318 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273331 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273342 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273434 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273446 4789 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273474 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273485 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273498 4789 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273509 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273521 4789 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273531 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273542 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273552 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273560 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273581 4789 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273593 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273605 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273615 4789 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273627 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.273640 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 
11:30:38.273730 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274318 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274367 4789 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274386 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274399 4789 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274412 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274425 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274427 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274503 4789 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274517 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274530 4789 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274542 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274557 4789 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274573 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274585 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274601 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274617 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274629 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274642 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274655 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274667 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274679 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274692 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274705 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274718 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274735 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274746 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274759 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274771 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274782 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274793 4789 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274805 4789 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274817 4789 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274829 4789 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274840 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274852 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274865 4789 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274877 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274889 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274900 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274913 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274924 4789 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274938 4789 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274951 4789 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274964 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274979 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.274990 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.275002 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.275015 4789 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.275026 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.275037 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.275049 4789 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.275061 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.275748 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.276577 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.276599 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.276611 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.277980 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.278920 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.280213 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.281080 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.282827 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.283512 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.283568 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.283984 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.284403 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.284856 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.284909 4789 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.284948 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.284965 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.284979 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.284996 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285010 4789 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285025 4789 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285038 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285051 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285064 4789 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285113 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285129 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285144 4789 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285161 4789 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285176 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285192 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285205 4789 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285218 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285231 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285245 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285257 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285271 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285284 4789 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285297 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285309 4789 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285328 4789 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285341 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285361 4789 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285376 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285389 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285401 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285414 4789 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285426 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285438 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285451 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285484 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285496 4789 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285510 4789 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285522 4789 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285536 4789 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285549 4789 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285561 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285574 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285588 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285600 4789 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285613 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285625 4789 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.285845 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286754 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286796 4789 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286812 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286827 4789 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286844 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286861 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286875 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286893 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286906 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286919 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286939 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286952 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.286964 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.287008 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.287491 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.287851 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004" exitCode=255 Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.288468 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.288974 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.290263 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.290722 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.291747 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.292194 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.292697 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.293892 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.294488 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.295571 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004"} Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.295657 4789 scope.go:117] "RemoveContainer" containerID="1fd6d65d4251753aa6ff29e27cd70770dc5f08eb51cc717f789e65ac4a3ac7ba" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.298763 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.311856 4789 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.317657 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.319406 4789 scope.go:117] "RemoveContainer" containerID="77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004" Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.319773 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.320163 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.344190 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.365519 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.379859 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.389524 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.397954 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.399975 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.405353 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:30:38 crc kubenswrapper[4789]: W1124 11:30:38.413814 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3d66ea37bd41b1d9381851b3d1f8e4f71ed047c17640d6e99f1277dbfd9b4ea2 WatchSource:0}: Error finding container 3d66ea37bd41b1d9381851b3d1f8e4f71ed047c17640d6e99f1277dbfd9b4ea2: Status 404 returned error can't find the container with id 3d66ea37bd41b1d9381851b3d1f8e4f71ed047c17640d6e99f1277dbfd9b4ea2 Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.414590 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:30:38 crc kubenswrapper[4789]: W1124 11:30:38.418292 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5410df9cfe54c6d2a5ab0b5710d96bbf811c339d58695afe8a901d6be8c79baa WatchSource:0}: Error finding container 5410df9cfe54c6d2a5ab0b5710d96bbf811c339d58695afe8a901d6be8c79baa: Status 404 returned error can't find the container with id 5410df9cfe54c6d2a5ab0b5710d96bbf811c339d58695afe8a901d6be8c79baa Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.420355 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: W1124 11:30:38.428360 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4cc416be501b6caa7f1d5d425722454c6362f819e120077a43a73d1a78cb7747 WatchSource:0}: Error finding container 4cc416be501b6caa7f1d5d425722454c6362f819e120077a43a73d1a78cb7747: Status 404 returned error can't find the container with id 4cc416be501b6caa7f1d5d425722454c6362f819e120077a43a73d1a78cb7747 Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.433326 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.450970 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.463947 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.478796 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd6d65d4251753aa6ff29e27cd70770dc5f08eb51cc717f789e65ac4a3ac7ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:31Z\\\",\\\"message\\\":\\\"W1124 11:30:21.257111 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:30:21.258254 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763983821 cert, and key in /tmp/serving-cert-2968144839/serving-signer.crt, /tmp/serving-cert-2968144839/serving-signer.key\\\\nI1124 11:30:21.541196 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:30:21.544627 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:30:21.544867 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:21.546787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2968144839/tls.crt::/tmp/serving-cert-2968144839/tls.key\\\\\\\"\\\\nF1124 11:30:31.880840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 
11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.493759 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.494267 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.507496 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.519425 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.530244 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.689046 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.689224 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:30:39.689208015 +0000 UTC m=+22.271679394 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.790818 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.790914 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.790975 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:38 
crc kubenswrapper[4789]: E1124 11:30:38.790983 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: I1124 11:30:38.791012 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791082 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:39.791057833 +0000 UTC m=+22.373529212 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791342 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791371 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791404 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791421 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791433 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:39.791407981 +0000 UTC m=+22.373879520 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791504 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791580 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791597 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791559 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:39.791534594 +0000 UTC m=+22.374006153 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:38 crc kubenswrapper[4789]: E1124 11:30:38.791703 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:39.791678898 +0000 UTC m=+22.374150277 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.295329 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35"} Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.295435 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3d66ea37bd41b1d9381851b3d1f8e4f71ed047c17640d6e99f1277dbfd9b4ea2"} Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.298526 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.302041 4789 scope.go:117] "RemoveContainer" containerID="77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004" Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.302280 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.303668 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4cc416be501b6caa7f1d5d425722454c6362f819e120077a43a73d1a78cb7747"} Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.306033 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c"} Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.306126 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c"} Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.306155 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5410df9cfe54c6d2a5ab0b5710d96bbf811c339d58695afe8a901d6be8c79baa"} Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.321199 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.339416 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.358762 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd6d65d4251753aa6ff29e27cd70770dc5f08eb51cc717f789e65ac4a3ac7ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:31Z\\\",\\\"message\\\":\\\"W1124 11:30:21.257111 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
11:30:21.258254 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763983821 cert, and key in /tmp/serving-cert-2968144839/serving-signer.crt, /tmp/serving-cert-2968144839/serving-signer.key\\\\nI1124 11:30:21.541196 1 observer_polling.go:159] Starting file observer\\\\nW1124 11:30:21.544627 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 11:30:21.544867 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:21.546787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2968144839/tls.crt::/tmp/serving-cert-2968144839/tls.key\\\\\\\"\\\\nF1124 11:30:31.880840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 
11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.374419 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.393002 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.410598 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.429237 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.445872 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.462651 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.480526 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.498716 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.512593 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.529142 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.544982 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.559624 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.574277 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.698771 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.698943 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:30:41.698923642 +0000 UTC m=+24.281395021 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.800267 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.800718 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.800902 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:39 crc kubenswrapper[4789]: I1124 11:30:39.801062 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.800391 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.801348 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.801517 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.801714 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:41.801690902 +0000 UTC m=+24.384162311 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.800802 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.802029 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.800936 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.801142 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.802232 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.802402 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:41.802384369 +0000 UTC m=+24.384855768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.803010 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:41.802978053 +0000 UTC m=+24.385449462 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 11:30:39 crc kubenswrapper[4789]: E1124 11:30:39.803245 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:41.803220399 +0000 UTC m=+24.385691828 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.169638 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.169697 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.169746 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:30:40 crc kubenswrapper[4789]: E1124 11:30:40.169786 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:30:40 crc kubenswrapper[4789]: E1124 11:30:40.169933 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:30:40 crc kubenswrapper[4789]: E1124 11:30:40.170031 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.173263 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.174203 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.175046 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.175787 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.176488 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.177165 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Nov 24 11:30:40 crc kubenswrapper[4789]: I1124 11:30:40.308549 4789 scope.go:117] "RemoveContainer" containerID="77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004"
Nov 24 11:30:40 crc kubenswrapper[4789]: E1124 11:30:40.308901 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.314847 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711"}
Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.334583 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.357093 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.379655 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.404378 4789 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.424732 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.441943 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.465297 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.482408 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.717006 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.717185 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:30:45.717149718 +0000 UTC m=+28.299621167 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.818202 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.818289 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.818345 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818399 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 11:30:41 crc kubenswrapper[4789]: I1124 11:30:41.818404 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818560 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:45.818527335 +0000 UTC m=+28.400998754 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818581 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818657 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818691 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818706 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818728 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818748 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818753 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818787 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:45.81875042 +0000 UTC m=+28.401221839 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818828 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:45.818801951 +0000 UTC m=+28.401273370 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:30:41 crc kubenswrapper[4789]: E1124 11:30:41.818868 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:45.818849133 +0000 UTC m=+28.401320752 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 11:30:42 crc kubenswrapper[4789]: I1124 11:30:42.168715 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:30:42 crc kubenswrapper[4789]: E1124 11:30:42.168908 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:30:42 crc kubenswrapper[4789]: I1124 11:30:42.169400 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:30:42 crc kubenswrapper[4789]: E1124 11:30:42.169559 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:30:42 crc kubenswrapper[4789]: I1124 11:30:42.169757 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:30:42 crc kubenswrapper[4789]: E1124 11:30:42.169861 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.859872 4789 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.861410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.861450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.861472 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.861537 4789 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.871586 4789 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.871820 4789 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.872766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.872865 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.872931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.872997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.873079 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:43Z","lastTransitionTime":"2025-11-24T11:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:43 crc kubenswrapper[4789]: E1124 11:30:43.896258 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.900136 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.900296 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.900353 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.900437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.900517 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:43Z","lastTransitionTime":"2025-11-24T11:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:43 crc kubenswrapper[4789]: E1124 11:30:43.916754 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.920828 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.920867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.920877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.920891 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.920900 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:43Z","lastTransitionTime":"2025-11-24T11:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:43 crc kubenswrapper[4789]: E1124 11:30:43.935450 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.938471 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.938521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.938530 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.938546 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.938557 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:43Z","lastTransitionTime":"2025-11-24T11:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:43 crc kubenswrapper[4789]: E1124 11:30:43.955442 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.959900 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.959935 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.959944 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.959959 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.959969 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:43Z","lastTransitionTime":"2025-11-24T11:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:43 crc kubenswrapper[4789]: E1124 11:30:43.974817 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:43Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:43 crc kubenswrapper[4789]: E1124 11:30:43.975005 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.976924 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.976960 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.976969 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.976988 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:43 crc kubenswrapper[4789]: I1124 11:30:43.977000 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:43Z","lastTransitionTime":"2025-11-24T11:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.079518 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.079560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.079571 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.079584 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.079595 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.168392 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.168471 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.168481 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:44 crc kubenswrapper[4789]: E1124 11:30:44.168528 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:30:44 crc kubenswrapper[4789]: E1124 11:30:44.168613 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:30:44 crc kubenswrapper[4789]: E1124 11:30:44.168899 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.181741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.181784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.181794 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.181810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.181820 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.283708 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.283773 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.283785 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.283804 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.283815 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.356181 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5fgg5"] Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.356614 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.361304 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.362708 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.368410 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.368786 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.368791 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.381446 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4hd6"] Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.382240 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9czvn"] Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.382431 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.382496 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bbbf7"] Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.382557 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.383166 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vztqv"] Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.383504 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.383524 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vztqv" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.385878 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.385903 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.385914 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.385928 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.385941 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.394601 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.394871 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.395041 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.397487 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.397712 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.398263 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.398297 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.398309 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.398364 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.398519 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.398527 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.398284 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.404444 4789 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.405041 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.405232 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.409993 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.410216 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.434336 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.458786 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.476683 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.488686 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.488721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.488730 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.488742 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.488753 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.491493 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.504361 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.518327 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.531549 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546345 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cnibin\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546388 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-systemd\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546413 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-kubelet\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546435 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-hostroot\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546452 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-config\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546489 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-env-overrides\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546503 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-script-lib\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546520 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-etc-kubernetes\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546542 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546656 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/30c4a832-f0e4-481b-a474-3ecea86049f6-rootfs\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546717 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30c4a832-f0e4-481b-a474-3ecea86049f6-proxy-tls\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546741 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-system-cni-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546777 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-ovn\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546800 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-cni-bin\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546823 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-systemd-units\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546845 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30c4a832-f0e4-481b-a474-3ecea86049f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546873 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-var-lib-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546900 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-netd\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.546931 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-os-release\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547056 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-k8s-cni-cncf-io\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547102 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-cni-multus\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547124 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547145 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-bin\") pod \"ovnkube-node-n4hd6\" (UID: 
\"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547168 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpqpp\" (UniqueName: \"kubernetes.io/projected/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-kube-api-access-xpqpp\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547190 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-os-release\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547228 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-netns\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547293 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/776a7cdb-6468-4e8a-8577-3535ff549781-cni-binary-copy\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547319 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-conf-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547338 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/da80bfe1-36b3-4239-bf6e-a855a490290a-hosts-file\") pod \"node-resolver-vztqv\" (UID: \"da80bfe1-36b3-4239-bf6e-a855a490290a\") " pod="openshift-dns/node-resolver-vztqv" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547357 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nz8q\" (UniqueName: \"kubernetes.io/projected/da80bfe1-36b3-4239-bf6e-a855a490290a-kube-api-access-6nz8q\") pod \"node-resolver-vztqv\" (UID: \"da80bfe1-36b3-4239-bf6e-a855a490290a\") " pod="openshift-dns/node-resolver-vztqv" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547377 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-slash\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547409 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-kubelet\") pod \"ovnkube-node-n4hd6\" 
(UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547427 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-log-socket\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547448 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72sq\" (UniqueName: \"kubernetes.io/projected/30c4a832-f0e4-481b-a474-3ecea86049f6-kube-api-access-q72sq\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547491 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547514 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-multus-certs\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547535 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-node-log\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547554 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cni-binary-copy\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547595 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547614 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-socket-dir-parent\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547600 4789 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547639 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ct4s\" (UniqueName: \"kubernetes.io/projected/776a7cdb-6468-4e8a-8577-3535ff549781-kube-api-access-2ct4s\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547751 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547778 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-cnibin\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547795 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/776a7cdb-6468-4e8a-8577-3535ff549781-multus-daemon-config\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547811 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-system-cni-dir\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547829 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-etc-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547847 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6d361cd-fbb3-466d-9026-4c685922072f-ovn-node-metrics-cert\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547864 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f7tm\" (UniqueName: \"kubernetes.io/projected/c6d361cd-fbb3-466d-9026-4c685922072f-kube-api-access-9f7tm\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547881 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-cni-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.547940 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-netns\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.565337 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.579709 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.591286 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.591351 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.591363 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.591385 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.591767 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.595302 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9
b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.618603 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.638476 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649100 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-var-lib-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649153 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-netd\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649175 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-os-release\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649201 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649224 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-bin\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649243 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-k8s-cni-cncf-io\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649261 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-cni-multus\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649290 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpqpp\" (UniqueName: \"kubernetes.io/projected/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-kube-api-access-xpqpp\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649313 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-os-release\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649331 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-netns\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649381 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-slash\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649399 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/776a7cdb-6468-4e8a-8577-3535ff549781-cni-binary-copy\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649416 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-conf-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649433 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/da80bfe1-36b3-4239-bf6e-a855a490290a-hosts-file\") pod \"node-resolver-vztqv\" (UID: \"da80bfe1-36b3-4239-bf6e-a855a490290a\") " pod="openshift-dns/node-resolver-vztqv" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649450 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nz8q\" (UniqueName: \"kubernetes.io/projected/da80bfe1-36b3-4239-bf6e-a855a490290a-kube-api-access-6nz8q\") pod \"node-resolver-vztqv\" (UID: \"da80bfe1-36b3-4239-bf6e-a855a490290a\") " 
pod="openshift-dns/node-resolver-vztqv" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649489 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-kubelet\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649512 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-log-socket\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649531 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72sq\" (UniqueName: \"kubernetes.io/projected/30c4a832-f0e4-481b-a474-3ecea86049f6-kube-api-access-q72sq\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649548 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-node-log\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649569 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649592 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-multus-certs\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649609 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649628 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cni-binary-copy\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649647 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: 
\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649667 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-socket-dir-parent\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649684 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ct4s\" (UniqueName: \"kubernetes.io/projected/776a7cdb-6468-4e8a-8577-3535ff549781-kube-api-access-2ct4s\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649702 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-system-cni-dir\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649720 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-cnibin\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649739 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/776a7cdb-6468-4e8a-8577-3535ff549781-multus-daemon-config\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649757 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-netns\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649782 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-etc-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649802 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6d361cd-fbb3-466d-9026-4c685922072f-ovn-node-metrics-cert\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649821 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f7tm\" (UniqueName: \"kubernetes.io/projected/c6d361cd-fbb3-466d-9026-4c685922072f-kube-api-access-9f7tm\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649838 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-cni-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649856 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cnibin\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-systemd\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649903 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-kubelet\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649919 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-hostroot\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649943 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-config\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649959 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-env-overrides\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649980 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-script-lib\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.649998 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-etc-kubernetes\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650020 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-ovn\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650038 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650055 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/30c4a832-f0e4-481b-a474-3ecea86049f6-rootfs\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650074 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30c4a832-f0e4-481b-a474-3ecea86049f6-proxy-tls\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650089 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-system-cni-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650106 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-systemd-units\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650125 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30c4a832-f0e4-481b-a474-3ecea86049f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650160 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-cni-bin\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650269 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-cni-bin\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650319 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-var-lib-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650342 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-netd\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650401 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-os-release\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650427 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-ovn-kubernetes\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650451 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-bin\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650499 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-k8s-cni-cncf-io\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650544 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-cni-multus\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650931 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-os-release\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650961 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-netns\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.650989 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-slash\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.651774 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/776a7cdb-6468-4e8a-8577-3535ff549781-cni-binary-copy\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.651817 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-conf-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.651853 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/da80bfe1-36b3-4239-bf6e-a855a490290a-hosts-file\") pod \"node-resolver-vztqv\" (UID: \"da80bfe1-36b3-4239-bf6e-a855a490290a\") " pod="openshift-dns/node-resolver-vztqv" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.652038 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-kubelet\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.652074 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-log-socket\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.652287 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-node-log\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.652336 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.652372 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-run-multus-certs\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.652409 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.653004 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cni-binary-copy\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.653475 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.653535 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-socket-dir-parent\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.653689 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-system-cni-dir\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.653734 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-cnibin\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.654176 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/776a7cdb-6468-4e8a-8577-3535ff549781-multus-daemon-config\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.654220 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-netns\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.654246 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-etc-openvswitch\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655428 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/30c4a832-f0e4-481b-a474-3ecea86049f6-rootfs\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655475 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-systemd\") 
pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655492 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-hostroot\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655517 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-cnibin\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655535 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-host-var-lib-kubelet\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655547 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-systemd-units\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655556 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-system-cni-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655528 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655656 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-multus-cni-dir\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655863 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-ovn\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.655887 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/776a7cdb-6468-4e8a-8577-3535ff549781-etc-kubernetes\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.656436 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-script-lib\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.656506 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.656929 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30c4a832-f0e4-481b-a474-3ecea86049f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.657022 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-env-overrides\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.657159 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-config\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.663389 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30c4a832-f0e4-481b-a474-3ecea86049f6-proxy-tls\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.671943 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6d361cd-fbb3-466d-9026-4c685922072f-ovn-node-metrics-cert\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.674034 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ct4s\" (UniqueName: \"kubernetes.io/projected/776a7cdb-6468-4e8a-8577-3535ff549781-kube-api-access-2ct4s\") pod \"multus-5fgg5\" (UID: \"776a7cdb-6468-4e8a-8577-3535ff549781\") " pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.675002 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q72sq\" (UniqueName: \"kubernetes.io/projected/30c4a832-f0e4-481b-a474-3ecea86049f6-kube-api-access-q72sq\") pod \"machine-config-daemon-9czvn\" (UID: \"30c4a832-f0e4-481b-a474-3ecea86049f6\") " pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.682552 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nz8q\" (UniqueName: \"kubernetes.io/projected/da80bfe1-36b3-4239-bf6e-a855a490290a-kube-api-access-6nz8q\") pod \"node-resolver-vztqv\" (UID: \"da80bfe1-36b3-4239-bf6e-a855a490290a\") " pod="openshift-dns/node-resolver-vztqv" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.684915 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpqpp\" (UniqueName: \"kubernetes.io/projected/a8eb8871-21cb-4fb0-92a4-02d4224ff2cc-kube-api-access-xpqpp\") pod \"multus-additional-cni-plugins-bbbf7\" (UID: \"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\") " pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.691303 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f7tm\" (UniqueName: \"kubernetes.io/projected/c6d361cd-fbb3-466d-9026-4c685922072f-kube-api-access-9f7tm\") pod \"ovnkube-node-n4hd6\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.694379 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.694415 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc 
kubenswrapper[4789]: I1124 11:30:44.694425 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.694440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.694449 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.695436 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.700851 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.713328 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:30:44 crc kubenswrapper[4789]: W1124 11:30:44.713907 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6d361cd_fbb3_466d_9026_4c685922072f.slice/crio-369714fa1e537121e09a6c7963147c6fdbb6b5e6a73a97fcbf912ba24edec73c WatchSource:0}: Error finding container 369714fa1e537121e09a6c7963147c6fdbb6b5e6a73a97fcbf912ba24edec73c: Status 404 returned error can't find the container with id 369714fa1e537121e09a6c7963147c6fdbb6b5e6a73a97fcbf912ba24edec73c Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.715783 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.722617 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" Nov 24 11:30:44 crc kubenswrapper[4789]: W1124 11:30:44.724316 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c4a832_f0e4_481b_a474_3ecea86049f6.slice/crio-c53c82c61526bca094af4ea41243bb496e747ba3e54109e1ad6c1a3d90c5a63c WatchSource:0}: Error finding container c53c82c61526bca094af4ea41243bb496e747ba3e54109e1ad6c1a3d90c5a63c: Status 404 returned error can't find the container with id c53c82c61526bca094af4ea41243bb496e747ba3e54109e1ad6c1a3d90c5a63c Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.727802 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vztqv" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.733396 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: W1124 11:30:44.747036 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb8871_21cb_4fb0_92a4_02d4224ff2cc.slice/crio-3bbae068dfcc895c4793c0dfa03d838351ee21afbcdf8f25211d549293948ac3 WatchSource:0}: Error finding container 3bbae068dfcc895c4793c0dfa03d838351ee21afbcdf8f25211d549293948ac3: Status 404 returned error can't find the container with id 3bbae068dfcc895c4793c0dfa03d838351ee21afbcdf8f25211d549293948ac3 Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.748590 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.766846 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.786077 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.802432 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.802486 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.802497 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.802513 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.802524 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.818296 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.833594 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:44Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.905396 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.905431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.905439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.905468 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.905479 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:44Z","lastTransitionTime":"2025-11-24T11:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:44 crc kubenswrapper[4789]: I1124 11:30:44.970753 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5fgg5" Nov 24 11:30:44 crc kubenswrapper[4789]: W1124 11:30:44.984650 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod776a7cdb_6468_4e8a_8577_3535ff549781.slice/crio-16bcd18bb966d93a5b4cc55f7b7a0d31a2f1efe4d1781ab1b572950577431487 WatchSource:0}: Error finding container 16bcd18bb966d93a5b4cc55f7b7a0d31a2f1efe4d1781ab1b572950577431487: Status 404 returned error can't find the container with id 16bcd18bb966d93a5b4cc55f7b7a0d31a2f1efe4d1781ab1b572950577431487 Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.009381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.009439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.009449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.009495 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.009571 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.111499 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.111533 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.111541 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.111555 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.111565 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.214083 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.214134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.214155 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.214180 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.214189 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.317101 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.317146 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.317155 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.317171 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.317182 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.325605 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.325657 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.325670 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"c53c82c61526bca094af4ea41243bb496e747ba3e54109e1ad6c1a3d90c5a63c"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.327310 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vztqv" event={"ID":"da80bfe1-36b3-4239-bf6e-a855a490290a","Type":"ContainerStarted","Data":"17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.327359 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vztqv" event={"ID":"da80bfe1-36b3-4239-bf6e-a855a490290a","Type":"ContainerStarted","Data":"582e0c35e9c0adfdba5f4e11d675ff6552c4a32eb495b08d6feef88a83ef9046"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.329274 4789 generic.go:334] "Generic (PLEG): container finished" podID="a8eb8871-21cb-4fb0-92a4-02d4224ff2cc" containerID="0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed" exitCode=0 Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.329343 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" event={"ID":"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc","Type":"ContainerDied","Data":"0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.329370 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" event={"ID":"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc","Type":"ContainerStarted","Data":"3bbae068dfcc895c4793c0dfa03d838351ee21afbcdf8f25211d549293948ac3"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.331275 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5fgg5" event={"ID":"776a7cdb-6468-4e8a-8577-3535ff549781","Type":"ContainerStarted","Data":"7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.331307 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5fgg5" event={"ID":"776a7cdb-6468-4e8a-8577-3535ff549781","Type":"ContainerStarted","Data":"16bcd18bb966d93a5b4cc55f7b7a0d31a2f1efe4d1781ab1b572950577431487"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.332685 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6" exitCode=0 Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.332738 4789 kubelet.go:2453] "SyncLoop (PLEG): 
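
Note on the SyncLoop (PLEG) entries: these are the kubelet's pod lifecycle event generator relisting the runtime and reporting container state changes; each event is logged with the pod UID, an event type (ContainerStarted, ContainerDied, ...), and the container or sandbox ID as its data. A simplified sketch of that event shape, reusing the multus-5fgg5 UID and container ID from the entries above:

    package main

    import "fmt"

    // Simplified shape of the PLEG events as they appear in this log;
    // the real kubelet type lives in its pleg package.
    type PodLifecycleEventType string

    const (
        ContainerStarted PodLifecycleEventType = "ContainerStarted"
        ContainerDied    PodLifecycleEventType = "ContainerDied"
    )

    type PodLifecycleEvent struct {
        ID   string                // pod UID
        Type PodLifecycleEventType // what changed
        Data interface{}           // container ID for the types above
    }

    func main() {
        ev := PodLifecycleEvent{
            ID:   "776a7cdb-6468-4e8a-8577-3535ff549781",
            Type: ContainerStarted,
            Data: "16bcd18bb966d93a5b4cc55f7b7a0d31a2f1efe4d1781ab1b572950577431487",
        }
        fmt.Printf("SyncLoop (PLEG): event for pod: %+v\n", ev)
    }
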
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.332793 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"369714fa1e537121e09a6c7963147c6fdbb6b5e6a73a97fcbf912ba24edec73c"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.343258 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.365324 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.381449 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.395836 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.418793 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.423586 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.423626 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.423637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.423653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.423664 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
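
Note on the CrashLoopBackOff waiting reason above ("back-off 10s restarting failed container"): the kubelet delays each restart of a failing container with an exponential back-off that starts at 10s and doubles per failure, capped at 5m per upstream defaults. A sketch of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet container restart back-off: 10s initial, doubling,
        // capped (300s in upstream defaults).
        backoff, max := 10*time.Second, 5*time.Minute
        for i := 0; i < 7; i++ {
            fmt.Printf("restart %d: back-off %s\n", i+1, backoff)
            backoff *= 2
            if backoff > max {
                backoff = max
            }
        }
    }
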
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.436843 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.455560 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.472873 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.488426 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.526820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.527261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.527275 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.527292 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.527303 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
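
Note on the recurring "no CNI configuration file in /etc/kubernetes/cni/net.d/" message: this is the runtime's CNI loader finding the conf directory empty — it scans for *.conf, *.conflist, or *.json files and the node stays NetworkPluginNotReady until the network plugin (here, ovnkube-controller) writes one. Roughly, assuming the standard scan-by-extension behavior of the CNI library:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // Approximates the CNI conf-dir scan: any *.conf, *.conflist or
    // *.json file counts as a network configuration.
    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        if !ok {
            // The condition behind "NetworkPluginNotReady" above.
            fmt.Println("no CNI configuration file in /etc/kubernetes/cni/net.d/")
        }
    }
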
Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.528412 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.538912 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.557675 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.573120 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.583850 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.599582 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.613829 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.629657 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.629692 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.629703 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.629719 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.629731 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.636169 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.655049 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.668331 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.681693 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.706451 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.731274 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.733053 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.733078 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.733086 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.733098 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.733107 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.762683 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.762912 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:30:53.762892364 +0000 UTC m=+36.345363763 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.768052 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.818782 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.834706 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.834735 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.834744 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.834757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.834766 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.859247 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.864096 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.864138 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.864159 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.864186 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864292 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864334 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:53.864322142 +0000 UTC m=+36.446793521 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864368 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864389 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:53.864383913 +0000 UTC m=+36.446855292 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864439 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864450 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864479 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864503 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:53.864496307 +0000 UTC m=+36.446967676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864545 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864554 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864561 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:45 crc kubenswrapper[4789]: E1124 11:30:45.864578 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:53.864572648 +0000 UTC m=+36.447044027 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.877080 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc 
kubenswrapper[4789]: I1124 11:30:45.891556 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-zthhc"] Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.891927 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.893810 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.894306 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.894417 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.894581 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.911476 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.925384 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.936543 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.936578 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.936588 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.936605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.936619 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:45Z","lastTransitionTime":"2025-11-24T11:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.949742 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b01
9c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.965281 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpwcx\" (UniqueName: \"kubernetes.io/projected/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-kube-api-access-vpwcx\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.965358 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-serviceca\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.965385 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-host\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.967106 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.979746 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:45 crc kubenswrapper[4789]: I1124 11:30:45.993044 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.013572 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.030018 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.038296 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.038322 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.038330 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.038343 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.038354 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.048184 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.064875 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.066300 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-host\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.066336 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-serviceca\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.066378 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpwcx\" (UniqueName: \"kubernetes.io/projected/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-kube-api-access-vpwcx\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.066551 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-host\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.067431 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-serviceca\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.077756 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.090480 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpwcx\" (UniqueName: \"kubernetes.io/projected/bc5c4f42-e991-449b-aa93-2dea9d61dbc4-kube-api-access-vpwcx\") pod \"node-ca-zthhc\" (UID: \"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\") " pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.097103 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.114875 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.129094 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.141065 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.141603 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.141703 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.141780 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.141847 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.168519 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.168522 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.168682 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:46 crc kubenswrapper[4789]: E1124 11:30:46.169048 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:30:46 crc kubenswrapper[4789]: E1124 11:30:46.169065 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:30:46 crc kubenswrapper[4789]: E1124 11:30:46.169205 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.205519 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-zthhc" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.244553 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.244592 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.244606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.244625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.244636 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.340183 4789 generic.go:334] "Generic (PLEG): container finished" podID="a8eb8871-21cb-4fb0-92a4-02d4224ff2cc" containerID="902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9" exitCode=0 Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.340264 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" event={"ID":"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc","Type":"ContainerDied","Data":"902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.341443 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zthhc" event={"ID":"bc5c4f42-e991-449b-aa93-2dea9d61dbc4","Type":"ContainerStarted","Data":"bfdfb2a6f0dad68c864622bfc384cbd023defdf25e9d4e209aa3af45ed76efed"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.353306 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.353345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.353358 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.353374 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.353384 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.356694 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.356825 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.356988 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.357011 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.357024 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" 
event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.357037 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.357047 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.376879 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.389429 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.407116 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.421811 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.435633 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.447822 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.455623 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.455656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.455663 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.455678 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.455688 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.461200 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.474254 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.488689 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.503935 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.520709 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.533792 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.544316 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:46Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.558683 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.558733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.558744 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.558764 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.558773 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.661262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.661321 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.661335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.661356 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.661369 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.763910 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.763947 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.763957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.763971 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.763980 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.866487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.866537 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.866549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.866563 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.866573 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.968705 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.968932 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.968940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.968953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:46 crc kubenswrapper[4789]: I1124 11:30:46.968962 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:46Z","lastTransitionTime":"2025-11-24T11:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.070838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.070880 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.070891 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.070907 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.070919 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.173009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.173048 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.173059 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.173073 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.173084 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.173084 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.276061 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.276092 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.276101 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.276113 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.276122 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.362843 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zthhc" event={"ID":"bc5c4f42-e991-449b-aa93-2dea9d61dbc4","Type":"ContainerStarted","Data":"74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f"}
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.365983 4789 generic.go:334] "Generic (PLEG): container finished" podID="a8eb8871-21cb-4fb0-92a4-02d4224ff2cc" containerID="da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed" exitCode=0
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.366028 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" event={"ID":"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc","Type":"ContainerDied","Data":"da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed"}
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.378919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.378963 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.378974 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.378992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.379005 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.389792 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.425292 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.439860 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.451313 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.467347 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.481956 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.483157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.483199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.483208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.483222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.483231 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.496230 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.509767 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.521819 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.533992 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.547191 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.561643 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.574687 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.586623 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.586647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.586654 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.586666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.586675 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.587818 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.603954 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.618415 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.630768 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.643449 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.653264 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.666665 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.677442 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.689768 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.689909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.689932 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.689941 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.689955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.689980 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.702641 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.713927 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.726570 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.751390 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.763798 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.774840 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.796054 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.796100 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.796112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.796129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.796141 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.898446 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.898504 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.898514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.898542 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:47 crc kubenswrapper[4789]: I1124 11:30:47.898553 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:47Z","lastTransitionTime":"2025-11-24T11:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.000369 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.000418 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.000426 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.000440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.000475 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.102519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.102577 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.102595 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.102618 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.102635 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.168502 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.168608 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:48 crc kubenswrapper[4789]: E1124 11:30:48.168638 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.168701 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:48 crc kubenswrapper[4789]: E1124 11:30:48.168845 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:30:48 crc kubenswrapper[4789]: E1124 11:30:48.168938 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.188093 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.205747 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.205795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.205806 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.205824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.205837 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.206428 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.220269 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.235549 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.254849 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.268537 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.281273 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.297242 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.307665 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.307698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.307707 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.307721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.307730 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.316289 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.337205 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.358335 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.372918 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.375338 4789 generic.go:334] "Generic (PLEG): container finished" podID="a8eb8871-21cb-4fb0-92a4-02d4224ff2cc" containerID="e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858" exitCode=0 Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.375412 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" event={"ID":"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc","Type":"ContainerDied","Data":"e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.407401 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.413969 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.414012 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.414021 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.414036 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.414048 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.427254 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.442035 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.454518 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.471827 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.484444 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.499091 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.517360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.517400 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.517410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.517424 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.517434 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.522162 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b01
9c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.536980 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.551153 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":
\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.565615 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.581402 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.594181 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.608662 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.620102 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.620133 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.620144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.620157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.620167 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.621509 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.636474 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.651664 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.722597 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.722849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.722951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.723090 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.723195 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.826745 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.826777 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.826784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.826798 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.826807 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.929643 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.930160 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.930427 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.930666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:48 crc kubenswrapper[4789]: I1124 11:30:48.930887 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:48Z","lastTransitionTime":"2025-11-24T11:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.033053 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.033092 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.033103 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.033119 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.033131 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.135721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.135754 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.135764 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.135779 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.135790 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.238132 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.238196 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.238207 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.238227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.238238 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.342502 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.342546 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.342560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.342579 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.342602 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.382551 4789 generic.go:334] "Generic (PLEG): container finished" podID="a8eb8871-21cb-4fb0-92a4-02d4224ff2cc" containerID="50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f" exitCode=0 Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.382595 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" event={"ID":"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc","Type":"ContainerDied","Data":"50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.400868 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.417686 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.432611 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.447039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.447094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.447110 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.447132 4789 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.447152 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.451744 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.477020 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z 
is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.490913 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.511252 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50f
c0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.523345 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.535092 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.550082 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.555671 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.555824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.555861 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.555936 
4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.555977 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.570482 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.583098 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.597288 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.610678 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.657465 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.657623 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.657732 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.657830 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.658058 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.760271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.760305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.760314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.760326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.760335 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.863759 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.864447 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.864758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.864992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.865245 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.968701 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.968945 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.969122 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.969222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:49 crc kubenswrapper[4789]: I1124 11:30:49.969298 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:49Z","lastTransitionTime":"2025-11-24T11:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.072726 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.072761 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.072772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.072788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.072799 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.168985 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.168992 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.169088 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:50 crc kubenswrapper[4789]: E1124 11:30:50.169248 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:30:50 crc kubenswrapper[4789]: E1124 11:30:50.169442 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:30:50 crc kubenswrapper[4789]: E1124 11:30:50.169974 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.175089 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.175130 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.175146 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.175165 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.175181 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.278047 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.278104 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.278121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.278152 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.278174 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.380060 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.380439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.380448 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.380474 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.380483 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.396305 4789 generic.go:334] "Generic (PLEG): container finished" podID="a8eb8871-21cb-4fb0-92a4-02d4224ff2cc" containerID="cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03" exitCode=0 Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.396412 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" event={"ID":"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc","Type":"ContainerDied","Data":"cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.416366 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.431262 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.445961 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.462779 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.478006 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.482278 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.482304 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.482312 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.482325 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.482335 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.491243 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.504303 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.517708 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.527959 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.539365 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.552403 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.571610 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.585616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.585646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.585657 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.585675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.585686 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.586441 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.600785 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902
248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.688214 4789 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.688252 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.688263 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.688278 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.688290 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.791210 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.791246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.791256 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.791271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.791282 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.894164 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.894208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.894221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.894237 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.894250 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.996742 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.996784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.996793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.996807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:50 crc kubenswrapper[4789]: I1124 11:30:50.996816 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:50Z","lastTransitionTime":"2025-11-24T11:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.100091 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.100144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.100160 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.100184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.100202 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.203770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.203807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.203817 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.203831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.203869 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.307206 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.307277 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.307287 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.307305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.307316 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.404715 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.405098 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.409924 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.409953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.409962 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.409976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.409987 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.412794 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" event={"ID":"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc","Type":"ContainerStarted","Data":"5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.424103 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.436565 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.440479 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.459132 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.482895 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d
52b947ce13ecea98369d97a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.497028 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.511933 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.515330 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.515377 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.515386 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.515401 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.515412 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.536356 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37a
d3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.551018 4789 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.566494 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.583408 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.596240 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.610688 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.617998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.618037 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.618047 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.618060 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.618069 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.623132 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kub
e-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.637822 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.653592 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.668841 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.684210 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.696355 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.720523 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.720563 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.720576 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.720591 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.720604 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.724833 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.741424 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.754958 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.768164 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.781864 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.795675 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.807365 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.822833 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.822863 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.822871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.822884 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.822895 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.825281 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.835506 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.846047 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.925448 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.925511 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.925523 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.925540 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:51 crc kubenswrapper[4789]: I1124 11:30:51.925552 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:51Z","lastTransitionTime":"2025-11-24T11:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.028066 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.028122 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.028139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.028162 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.028178 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.130918 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.131009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.131026 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.131049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.131065 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.168699 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.168835 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:52 crc kubenswrapper[4789]: E1124 11:30:52.169082 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.169239 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:52 crc kubenswrapper[4789]: E1124 11:30:52.169439 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:30:52 crc kubenswrapper[4789]: E1124 11:30:52.169651 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.233524 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.233566 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.233583 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.233605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.233621 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.336487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.336742 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.336847 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.336934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.337041 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.417786 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.419393 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.439579 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.439686 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.439704 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.439735 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.439764 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.451852 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.468330 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.488807 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.503233 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.520357 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.537802 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.542514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.542581 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.542600 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.542964 
4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.543008 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.557440 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.576539 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.598852 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.613990 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.634665 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d
52b947ce13ecea98369d97a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.645400 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.645473 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.645487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.645510 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.645527 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.649410 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.661178 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.677902 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.691994 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.748854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.748931 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc 
kubenswrapper[4789]: I1124 11:30:52.748949 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.748976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.748993 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.852371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.852411 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.852422 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.852438 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.852450 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.954737 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.954781 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.954792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.954807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:52 crc kubenswrapper[4789]: I1124 11:30:52.954816 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:52Z","lastTransitionTime":"2025-11-24T11:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.056982 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.057020 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.057028 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.057072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.057087 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.159093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.159146 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.159161 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.159184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.159199 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.261269 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.261304 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.261313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.261328 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.261338 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.363669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.363789 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.363801 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.363812 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.363820 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.420160 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.471099 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.471134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.471144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.471161 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.471173 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.573515 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.573543 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.573552 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.573564 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.573572 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.675470 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.675508 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.675518 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.675533 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.675545 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.778027 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.778063 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.778071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.778083 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.778092 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.860981 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.861215 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:31:09.861179681 +0000 UTC m=+52.443651070 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.880500 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.880542 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.880554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.880569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.880580 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.962190 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.962232 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.962258 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.962283 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962383 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962480 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962498 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962508 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962539 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962398 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962581 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962608 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962558 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:31:09.962544967 +0000 UTC m=+52.545016336 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962690 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:31:09.962653549 +0000 UTC m=+52.545124968 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962720 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:31:09.96270679 +0000 UTC m=+52.545178259 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:30:53 crc kubenswrapper[4789]: E1124 11:30:53.962741 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:31:09.962730951 +0000 UTC m=+52.545202370 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.982245 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.982278 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.982286 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.982297 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:53 crc kubenswrapper[4789]: I1124 11:30:53.982306 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:53Z","lastTransitionTime":"2025-11-24T11:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.085338 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.085383 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.085393 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.085411 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.085422 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.169731 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.169876 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:54 crc kubenswrapper[4789]: E1124 11:30:54.170006 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.170118 4789 scope.go:117] "RemoveContainer" containerID="77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.170443 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:54 crc kubenswrapper[4789]: E1124 11:30:54.170566 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:30:54 crc kubenswrapper[4789]: E1124 11:30:54.170643 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.190194 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.190255 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.190273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.190298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.190316 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.291382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.291722 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.291743 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.291760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.291771 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: E1124 11:30:54.306533 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.312028 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.312062 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.312074 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.312093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.312105 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: E1124 11:30:54.324288 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.327758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.327804 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.327813 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.327827 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.327838 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.343933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.343964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.343973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.343986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.343996 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.360831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.360880 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.360896 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.360916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.360931 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:30:54 crc kubenswrapper[4789]: E1124 11:30:54.373425 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.374924 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.374951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.374959 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.374973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.374982 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.425853 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.427611 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.427980 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.429707 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/0.log" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.432536 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8" exitCode=1 Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.432582 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.433180 4789 scope.go:117] "RemoveContainer" containerID="e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.441128 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.459366 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.472391 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.479080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.479114 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.479122 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.479138 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.479147 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.485772 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.497780 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.511719 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.520669 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.533120 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.548307 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.564443 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.575940 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.580699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.580745 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.580757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.580774 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.580784 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.593354 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.612754 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.646218 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d
52b947ce13ecea98369d97a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.660759 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.682852 4789 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.682885 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.682894 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.682905 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.682914 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.683962 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.703080 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.716537 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
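The status payloads in these entries are hard to read because each one is a JSON patch quoted inside the kubelet's err field, so every quote arrives escaped, and container-log excerpts embedded in the patch are escaped again. A small sketch that strips one escaping layer with strconv.Unquote and pretty-prints the result; the embedded excerpt is hypothetical and trimmed (the uid is the kube-controller-manager pod's uid from the entry above), and a more deeply nested field would need Unquote applied once per layer.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

func main() {
	// A trimmed, hypothetical excerpt of a patch as it appears inside
	// the kubelet's quoted err field.
	escaped := `{\"metadata\":{\"uid\":\"5292f7bb-af17-47e9-94ae-f055f9e27927\"},\"status\":{\"phase\":\"Running\"}}`

	// Wrap in quotes so strconv.Unquote can strip one escaping layer.
	patch, err := strconv.Unquote(`"` + escaped + `"`)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Pretty-print the recovered JSON patch.
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(patch), "", "  "); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(pretty.String())
}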
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.730389 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.741162 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.751522 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.764115 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.777055 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.784668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.784702 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.784714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.784731 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.784741 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
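Interleaved with the webhook failures, the node keeps flapping to NotReady because /etc/kubernetes/cni/net.d/ contains no CNI configuration; the entries here are consistent with ovnkube-controller, which would normally write that configuration, exiting before it can do so. A minimal readiness probe in the spirit of that message follows, assuming the extension list libcni accepts (.conf, .conflist, .json) and treating an empty directory as "network not ready"; this models the log message above, not the kubelet's actual check.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains a CNI network configuration
// file. Directories are skipped; any file with a recognized extension
// counts as a configuration.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message above
	ok, err := hasCNIConfig(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
		os.Exit(1)
	}
	fmt.Println("CNI configuration present")
}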
Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.802971 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:53Z\\\",\\\"message\\\":\\\" Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609526 5991 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609898 5991 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609929 5991 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.610048 5991 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.610649 5991 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:30:53.610917 5991 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.611674 5991 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.819192 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.828499 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.842735 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.857779 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.886909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.886955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc 
kubenswrapper[4789]: I1124 11:30:54.886966 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.886981 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.886992 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.988803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.988844 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.988852 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.988869 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:54 crc kubenswrapper[4789]: I1124 11:30:54.988879 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:54Z","lastTransitionTime":"2025-11-24T11:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.091736 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.091780 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.091793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.091811 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.091822 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.194480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.194504 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.194511 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.194524 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.194548 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.296582 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.296614 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.296622 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.296634 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.296642 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.399189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.399248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.399259 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.399274 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.399284 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.438109 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/0.log" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.440791 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.440833 4789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.455318 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\
\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.467304 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc
0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.481145 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.492255 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.502039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.502093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.502105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.502125 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.502148 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.504833 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.521877 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.532998 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.545384 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.557051 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
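
[Editor's note] The payloads the status manager is trying to send are strategic merge patches: conditions and containerStatuses are merged by their type/name keys, and the $setElementOrder/conditions directive pins the ordering of the merged list. A stdlib-only Go sketch that decodes one of these patches and prints the pinned order (the patch literal is trimmed from the entries above; the uid is the network-check-source pod's):

package main

import (
	"encoding/json"
	"fmt"
)

// Decode a kubelet status patch and list the condition order pinned by
// the strategic-merge-patch $setElementOrder directive.
func main() {
	patch := []byte(`{
	  "metadata": {"uid": "9d751cbb-f2e2-430d-9754-c882a5e924a5"},
	  "status": {
	    "$setElementOrder/conditions": [
	      {"type": "PodReadyToStartContainers"},
	      {"type": "Initialized"},
	      {"type": "Ready"},
	      {"type": "ContainersReady"},
	      {"type": "PodScheduled"}
	    ]
	  }
	}`)
	var doc struct {
		Status map[string]json.RawMessage `json:"status"`
	}
	if err := json.Unmarshal(patch, &doc); err != nil {
		panic(err)
	}
	var order []struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal(doc.Status["$setElementOrder/conditions"], &order); err != nil {
		panic(err)
	}
	for i, c := range order {
		fmt.Printf("%d. %s\n", i+1, c.Type)
	}
}
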
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.573264 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d2
1aa9981776c102c57f0c1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:53Z\\\",\\\"message\\\":\\\" Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609526 5991 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609898 5991 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609929 5991 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.610048 5991 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.610649 5991 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:30:53.610917 5991 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.611674 5991 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.581660 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.590489 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.601036 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.603972 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.604149 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.604298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.604519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.604621 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.614233 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.706628 4789 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.706837 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.706986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.707135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.707193 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.809974 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.810047 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.810065 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.810092 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.810112 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.912765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.912859 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.912912 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.912940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:55 crc kubenswrapper[4789]: I1124 11:30:55.912956 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:55Z","lastTransitionTime":"2025-11-24T11:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.015993 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.016071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.016094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.016121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.016144 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.118742 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.118807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.118824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.118848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.118867 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.168353 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.168434 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.168353 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:56 crc kubenswrapper[4789]: E1124 11:30:56.168580 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:30:56 crc kubenswrapper[4789]: E1124 11:30:56.168732 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:30:56 crc kubenswrapper[4789]: E1124 11:30:56.168900 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.222091 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.222153 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.222174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.222200 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.222223 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.325413 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.325525 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.325565 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.325601 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.325624 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.429173 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.429248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.429274 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.429305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.429323 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.447185 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/1.log" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.448089 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/0.log" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.452792 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825" exitCode=1 Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.452856 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.452915 4789 scope.go:117] "RemoveContainer" containerID="e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.454578 4789 scope.go:117] "RemoveContainer" containerID="955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825" Nov 24 11:30:56 crc kubenswrapper[4789]: E1124 11:30:56.455031 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.470683 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.494585 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d2
1aa9981776c102c57f0c1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:53Z\\\",\\\"message\\\":\\\" Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609526 5991 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609898 5991 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609929 5991 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.610048 5991 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.610649 5991 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:30:53.610917 5991 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.611674 5991 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service 
openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.506957 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.522647 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.532272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.532322 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.532340 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.532363 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.532379 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.542209 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.558694 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.577447 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.590915 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.606222 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.626713 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.634096 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.634158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.634167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.634179 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.634188 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.642847 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.656148 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.667160 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.684774 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.736881 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.736995 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.737011 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.737033 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.737049 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.838886 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.838930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.838944 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.838960 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.838972 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.941404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.941490 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.941503 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.941520 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:56 crc kubenswrapper[4789]: I1124 11:30:56.941535 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:56Z","lastTransitionTime":"2025-11-24T11:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.044544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.044608 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.044620 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.044632 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.044659 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.147505 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.147549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.147562 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.147581 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.147598 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.250997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.251951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.252114 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.252311 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.252551 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.268236 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx"] Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.273047 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.277872 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.280527 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.297761 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.314585 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.343987 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d2
1aa9981776c102c57f0c1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ba041f3d56932dc730eccd02af156e610a234d52b947ce13ecea98369d97a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:53Z\\\",\\\"message\\\":\\\" Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609526 5991 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609898 5991 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:30:53.609929 5991 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.610048 5991 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.610649 5991 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1124 11:30:53.610917 5991 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:30:53.611674 5991 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service 
openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.355189 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.355242 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.355256 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.355272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.355284 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.359112 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.373063 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.389359 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.400736 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkqg\" (UniqueName: \"kubernetes.io/projected/7c88057c-782b-4cc3-8243-828d959f4434-kube-api-access-dmkqg\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.400804 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c88057c-782b-4cc3-8243-828d959f4434-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.400870 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c88057c-782b-4cc3-8243-828d959f4434-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.400895 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c88057c-782b-4cc3-8243-828d959f4434-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.410791 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.425237 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.440128 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.456620 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.456658 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.456668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.456679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.456688 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.458507 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/1.log" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.458881 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.462212 4789 scope.go:117] "RemoveContainer" containerID="955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825" Nov 24 11:30:57 crc kubenswrapper[4789]: E1124 11:30:57.462380 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.476996 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.491747 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.502184 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c88057c-782b-4cc3-8243-828d959f4434-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 
11:30:57.502590 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c88057c-782b-4cc3-8243-828d959f4434-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.502719 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c88057c-782b-4cc3-8243-828d959f4434-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.502906 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkqg\" (UniqueName: \"kubernetes.io/projected/7c88057c-782b-4cc3-8243-828d959f4434-kube-api-access-dmkqg\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.503280 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c88057c-782b-4cc3-8243-828d959f4434-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.503289 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c88057c-782b-4cc3-8243-828d959f4434-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.513971 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.515430 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c88057c-782b-4cc3-8243-828d959f4434-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.528964 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.536912 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkqg\" (UniqueName: \"kubernetes.io/projected/7c88057c-782b-4cc3-8243-828d959f4434-kube-api-access-dmkqg\") pod \"ovnkube-control-plane-749d76644c-jz2zx\" (UID: \"7c88057c-782b-4cc3-8243-828d959f4434\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.540543 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.555012 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.558878 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.558915 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.558926 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.558942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.558953 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.566584 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.589750 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.597413 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: W1124 11:30:57.605067 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c88057c_782b_4cc3_8243_828d959f4434.slice/crio-096360d161351d7ecd66b281f00403c9e28f90e42fe132d31ab9317c30bf5a97 WatchSource:0}: Error finding container 096360d161351d7ecd66b281f00403c9e28f90e42fe132d31ab9317c30bf5a97: Status 404 returned error can't find the container with id 096360d161351d7ecd66b281f00403c9e28f90e42fe132d31ab9317c30bf5a97 Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.618223 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.632528 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.656121 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.661044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.661088 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.661099 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.661115 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.661128 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.678233 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.688689 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.702783 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.714943 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.727971 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee198
2e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.742229 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.759407 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.762866 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.763005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.763090 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.763183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.763270 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.770182 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.781153 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.866292 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.866334 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.866344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.866360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.866369 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.968950 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.969269 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.969279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.969293 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:57 crc kubenswrapper[4789]: I1124 11:30:57.969301 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:57Z","lastTransitionTime":"2025-11-24T11:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.071860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.071904 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.071917 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.071933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.071946 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.169584 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:30:58 crc kubenswrapper[4789]: E1124 11:30:58.169701 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.169748 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.169784 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:30:58 crc kubenswrapper[4789]: E1124 11:30:58.169825 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:30:58 crc kubenswrapper[4789]: E1124 11:30:58.169925 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.173569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.173593 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.173600 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.173610 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.173618 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.182632 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.193690 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.210958 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d2
1aa9981776c102c57f0c1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.220140 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.230565 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.249414 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.267154 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.275983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.276034 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc 
kubenswrapper[4789]: I1124 11:30:58.276049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.276069 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.276084 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.280207 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.294193 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.304877 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.317007 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.329677 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.341894 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.362222 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.380916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.380968 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.380981 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.380999 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.381011 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.384240 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.398996 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-s69rz"] Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.399516 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:30:58 crc kubenswrapper[4789]: E1124 11:30:58.399583 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.416669 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.431695 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.443722 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.467352 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" 
event={"ID":"7c88057c-782b-4cc3-8243-828d959f4434","Type":"ContainerStarted","Data":"b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.467404 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" event={"ID":"7c88057c-782b-4cc3-8243-828d959f4434","Type":"ContainerStarted","Data":"a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.467418 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" event={"ID":"7c88057c-782b-4cc3-8243-828d959f4434","Type":"ContainerStarted","Data":"096360d161351d7ecd66b281f00403c9e28f90e42fe132d31ab9317c30bf5a97"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.483397 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.483676 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.483695 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.483706 4789 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.483723 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.483735 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.503574 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\
"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.522814 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a
8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.526002 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.526053 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5sw\" (UniqueName: \"kubernetes.io/projected/1033d5e6-680c-4193-aade-8c3d801b0e3f-kube-api-access-2h5sw\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.545843 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.559101 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.573272 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.586116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.586381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.586445 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.586529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.586599 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.591073 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.604952 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.618237 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.626916 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5sw\" (UniqueName: \"kubernetes.io/projected/1033d5e6-680c-4193-aade-8c3d801b0e3f-kube-api-access-2h5sw\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.627053 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:30:58 crc kubenswrapper[4789]: E1124 11:30:58.627141 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:30:58 crc kubenswrapper[4789]: E1124 11:30:58.627186 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs podName:1033d5e6-680c-4193-aade-8c3d801b0e3f nodeName:}" failed. 
No retries permitted until 2025-11-24 11:30:59.12717081 +0000 UTC m=+41.709642189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs") pod "network-metrics-daemon-s69rz" (UID: "1033d5e6-680c-4193-aade-8c3d801b0e3f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.634596 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.645537 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5sw\" (UniqueName: \"kubernetes.io/projected/1033d5e6-680c-4193-aade-8c3d801b0e3f-kube-api-access-2h5sw\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.662479 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d2
1aa9981776c102c57f0c1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.675947 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.686847 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.688421 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.688450 4789 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.688483 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.688498 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.688511 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.703139 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.721416 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d2
1aa9981776c102c57f0c1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.731133 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.743013 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.753833 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.764361 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.786711 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.791345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.791523 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc 
kubenswrapper[4789]: I1124 11:30:58.791608 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.791694 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.791770 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.801864 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:3
0:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.821852 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.837383 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.854930 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.873736 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.889140 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.893860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.894000 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.894024 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.894056 
4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.894079 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.900999 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.913890 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.925545 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.997012 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.997107 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.997131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.997160 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:58 crc kubenswrapper[4789]: I1124 11:30:58.997181 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:58Z","lastTransitionTime":"2025-11-24T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.099691 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.099735 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.099745 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.099760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.099771 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.131644 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:30:59 crc kubenswrapper[4789]: E1124 11:30:59.131832 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:30:59 crc kubenswrapper[4789]: E1124 11:30:59.131898 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs podName:1033d5e6-680c-4193-aade-8c3d801b0e3f nodeName:}" failed. No retries permitted until 2025-11-24 11:31:00.131877219 +0000 UTC m=+42.714348608 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs") pod "network-metrics-daemon-s69rz" (UID: "1033d5e6-680c-4193-aade-8c3d801b0e3f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.202587 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.202662 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.202686 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.202716 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.202740 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.306392 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.306438 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.306449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.306482 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.306497 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.409628 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.409694 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.409716 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.409741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.409790 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.513060 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.513163 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.513185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.513208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.513278 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.616522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.616595 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.616615 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.616640 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.616659 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.719264 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.719560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.719645 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.719755 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.720080 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.823236 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.823669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.823878 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.824139 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.824363 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.926986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.927031 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.927044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.927061 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:30:59 crc kubenswrapper[4789]: I1124 11:30:59.927072 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:30:59Z","lastTransitionTime":"2025-11-24T11:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.029502 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.029584 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.029604 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.029631 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.029649 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.132646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.132734 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.132760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.132790 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.132813 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.143319 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:00 crc kubenswrapper[4789]: E1124 11:31:00.143540 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:31:00 crc kubenswrapper[4789]: E1124 11:31:00.143641 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs podName:1033d5e6-680c-4193-aade-8c3d801b0e3f nodeName:}" failed. No retries permitted until 2025-11-24 11:31:02.143619286 +0000 UTC m=+44.726090755 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs") pod "network-metrics-daemon-s69rz" (UID: "1033d5e6-680c-4193-aade-8c3d801b0e3f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.168593 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.168611 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.168760 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:00 crc kubenswrapper[4789]: E1124 11:31:00.169293 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.168807 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:00 crc kubenswrapper[4789]: E1124 11:31:00.169436 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:00 crc kubenswrapper[4789]: E1124 11:31:00.169634 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:00 crc kubenswrapper[4789]: E1124 11:31:00.170145 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.235674 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.235930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.236002 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.236061 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.236123 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.339553 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.340283 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.340375 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.340499 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.340580 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.443183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.443390 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.443547 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.443644 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.443702 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.546890 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.547177 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.547335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.547493 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.547642 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.651115 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.651171 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.651192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.651218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.651238 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.754255 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.754294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.754305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.754338 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.754349 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.857250 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.857324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.857345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.857372 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.857394 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.960005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.960056 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.960067 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.960085 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:00 crc kubenswrapper[4789]: I1124 11:31:00.960097 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:00Z","lastTransitionTime":"2025-11-24T11:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.062355 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.062636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.062719 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.062822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.062895 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.165657 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.165714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.165729 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.165751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.165798 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.267933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.267976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.267987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.268001 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.268011 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.371983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.372037 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.372074 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.372095 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.372109 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.474934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.475003 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.475024 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.475051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.475073 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.578609 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.578656 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.578668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.578684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.578696 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.681910 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.681960 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.681975 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.681998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.682015 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.784631 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.784696 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.784713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.784739 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.784757 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.888570 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.888668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.888709 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.888747 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.888786 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.992145 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.992183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.992194 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.992271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:01 crc kubenswrapper[4789]: I1124 11:31:01.992285 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:01Z","lastTransitionTime":"2025-11-24T11:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.095505 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.095580 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.095604 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.095632 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.095648 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.163880 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:02 crc kubenswrapper[4789]: E1124 11:31:02.164257 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:31:02 crc kubenswrapper[4789]: E1124 11:31:02.164387 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs podName:1033d5e6-680c-4193-aade-8c3d801b0e3f nodeName:}" failed. No retries permitted until 2025-11-24 11:31:06.164353521 +0000 UTC m=+48.746824940 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs") pod "network-metrics-daemon-s69rz" (UID: "1033d5e6-680c-4193-aade-8c3d801b0e3f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.169212 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.169221 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:02 crc kubenswrapper[4789]: E1124 11:31:02.169391 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.169231 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:02 crc kubenswrapper[4789]: E1124 11:31:02.169504 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:02 crc kubenswrapper[4789]: E1124 11:31:02.169544 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.169527 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:02 crc kubenswrapper[4789]: E1124 11:31:02.170192 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.198278 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.198331 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.198347 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.198366 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.198380 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.300248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.300302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.300316 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.300336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.300393 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.403077 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.403153 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.403169 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.403190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.403206 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.505521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.505559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.505571 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.505589 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.505602 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.607634 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.607673 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.607685 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.607700 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.607711 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.710033 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.710098 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.710107 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.710122 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.710135 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.813550 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.813621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.813646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.813675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.813696 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.915704 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.915753 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.915768 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.915792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:02 crc kubenswrapper[4789]: I1124 11:31:02.915809 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:02Z","lastTransitionTime":"2025-11-24T11:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.018040 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.018094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.018105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.018125 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.018137 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.120257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.120306 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.120320 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.120336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.120349 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.222901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.223009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.223030 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.223096 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.223113 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.326643 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.326702 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.326718 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.326740 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.326760 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.430039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.430100 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.430116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.430141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.430157 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.536743 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.536794 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.536817 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.536840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.536856 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.640592 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.640751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.640777 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.640807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.640827 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.744377 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.744523 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.744550 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.744728 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.744832 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.849081 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.849135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.849151 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.849174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.849191 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.951779 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.951841 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.951853 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.951868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:03 crc kubenswrapper[4789]: I1124 11:31:03.951881 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:03Z","lastTransitionTime":"2025-11-24T11:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.054192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.054240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.054252 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.054267 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.054649 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.157872 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.157937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.157958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.157987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.158011 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.169227 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.169402 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.169703 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.169824 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.169841 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.169862 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.169967 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.170056 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.260638 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.260679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.260687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.260701 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.260710 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.363182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.363246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.363270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.363302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.363323 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.444387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.444495 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.444520 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.444548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.444570 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.465513 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:04Z is after 
2025-08-24T17:21:41Z" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.470584 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.470612 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.470622 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.470637 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.470650 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.492568 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:04Z is after 
2025-08-24T17:21:41Z" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.498093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.498180 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.498229 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.498253 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.498302 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.521870 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:04Z is after 
2025-08-24T17:21:41Z" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.527174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.527227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.527246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.527343 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.527406 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.548069 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:04Z is after 
2025-08-24T17:21:41Z" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.552816 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.552871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.552887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.552907 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.552925 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.573718 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:04Z is after 
2025-08-24T17:21:41Z" Nov 24 11:31:04 crc kubenswrapper[4789]: E1124 11:31:04.574013 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.576005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.576115 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.576135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.576156 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.576172 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.679261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.679309 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.679320 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.679336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.679349 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.782699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.782753 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.782765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.782781 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.782793 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.885623 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.885697 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.885721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.885751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.885778 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.988443 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.988550 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.988569 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.988594 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:04 crc kubenswrapper[4789]: I1124 11:31:04.988612 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:04Z","lastTransitionTime":"2025-11-24T11:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.091580 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.091654 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.091672 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.091695 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.091733 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.194785 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.194844 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.194854 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.194870 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.194880 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.298399 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.298448 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.298489 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.298522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.298538 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.401157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.401212 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.401225 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.401241 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.401255 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.503541 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.503580 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.503591 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.503607 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.503617 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.605645 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.605684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.605720 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.605736 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.605748 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.707967 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.708009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.708024 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.708044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.708057 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.811885 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.812222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.812369 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.812586 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.812788 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.915584 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.915835 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.915921 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.916044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:05 crc kubenswrapper[4789]: I1124 11:31:05.916122 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:05Z","lastTransitionTime":"2025-11-24T11:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.019365 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.019443 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.019530 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.019559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.019584 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.122699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.122763 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.122782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.122807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.122825 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.169229 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:06 crc kubenswrapper[4789]: E1124 11:31:06.169742 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.169375 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:06 crc kubenswrapper[4789]: E1124 11:31:06.170149 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.169229 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.169406 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:06 crc kubenswrapper[4789]: E1124 11:31:06.170688 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:06 crc kubenswrapper[4789]: E1124 11:31:06.170520 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.209362 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:06 crc kubenswrapper[4789]: E1124 11:31:06.209571 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:31:06 crc kubenswrapper[4789]: E1124 11:31:06.209738 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs podName:1033d5e6-680c-4193-aade-8c3d801b0e3f nodeName:}" failed. No retries permitted until 2025-11-24 11:31:14.209703586 +0000 UTC m=+56.792174965 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs") pod "network-metrics-daemon-s69rz" (UID: "1033d5e6-680c-4193-aade-8c3d801b0e3f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.224874 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.224922 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.224938 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.224958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.224971 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.327683 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.327734 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.327751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.327770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.327786 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.430693 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.430741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.430756 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.430777 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.430792 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.533258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.533330 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.533354 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.533382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.533404 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.635293 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.635325 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.635332 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.635346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.635355 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.737788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.737825 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.737837 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.737850 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.737858 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.840473 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.840722 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.840788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.840864 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.840934 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.942909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.942949 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.942961 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.942977 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:06 crc kubenswrapper[4789]: I1124 11:31:06.942988 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:06Z","lastTransitionTime":"2025-11-24T11:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.045609 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.045653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.045664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.045679 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.045690 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.149052 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.149431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.149711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.150078 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.150249 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.253305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.253349 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.253360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.253377 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.253389 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.356774 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.356848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.357057 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.357089 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.357115 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.459274 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.459329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.459343 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.459364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.459381 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.562084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.562125 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.562137 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.562154 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.562165 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.665330 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.665395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.665439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.665462 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.665491 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.768749 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.768824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.768843 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.768868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.768885 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.871090 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.871126 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.871141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.871156 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.871168 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.973757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.973807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.973816 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.973832 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:07 crc kubenswrapper[4789]: I1124 11:31:07.973843 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:07Z","lastTransitionTime":"2025-11-24T11:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.075625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.075669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.075678 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.075691 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.075699 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.093269 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.093891 4789 scope.go:117] "RemoveContainer" containerID="955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.168330 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:08 crc kubenswrapper[4789]: E1124 11:31:08.168724 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.168419 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:08 crc kubenswrapper[4789]: E1124 11:31:08.168808 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.168521 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:08 crc kubenswrapper[4789]: E1124 11:31:08.168866 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.168384 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:08 crc kubenswrapper[4789]: E1124 11:31:08.168916 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.177866 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.177905 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.177916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.177933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.177945 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.181339 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.192794 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.209265 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\
\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.220710 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.233885 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.243863 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.254224 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.269515 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.280416 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.280485 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.280499 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.281313 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.281327 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.281477 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.298444 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.307019 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.315046 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.323775 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.334305 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.345156 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.354236 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 
11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.382808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.382842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.382853 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.382870 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.382881 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.485608 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.485641 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.485653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.485668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.485679 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.506934 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/1.log" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.509448 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.509891 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.526953 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.540081 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae
34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.560693 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.575987 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.587814 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.587994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.588022 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.588032 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.588049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.588060 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.601533 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.613777 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.625242 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.638910 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.654634 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f9
23ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.662740 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.670061 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.677666 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.687831 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.690540 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.690578 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.690586 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.690599 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.690609 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.701061 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.711753 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.793034 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.793074 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.793082 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.793095 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.793104 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.895219 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.895259 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.895271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.895310 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.895320 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.997868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.998079 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.998209 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.998292 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:08 crc kubenswrapper[4789]: I1124 11:31:08.998381 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:08Z","lastTransitionTime":"2025-11-24T11:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.100589 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.100645 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.100664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.100683 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.100697 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.203346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.203606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.203739 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.203842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.203979 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.306144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.306190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.306205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.306226 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.306240 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.408652 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.408702 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.408712 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.408728 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.408739 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.511220 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.511276 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.511291 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.511311 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.511326 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.513441 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/2.log" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.514216 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/1.log" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.517372 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec" exitCode=1 Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.517409 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.517449 4789 scope.go:117] "RemoveContainer" containerID="955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.518776 4789 scope.go:117] "RemoveContainer" containerID="f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec" Nov 24 11:31:09 crc kubenswrapper[4789]: E1124 11:31:09.519609 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.535102 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.553949 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.566514 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 
11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.582400 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.596421 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.610180 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.614121 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.614185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.614194 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.614211 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.614222 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.623326 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.633420 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.643600 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.654505 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.667434 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.681043 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.701313 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f9
23ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://955a3bd1c17a9abb17278636982b95e2af5da2d21aa9981776c102c57f0c1825\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:30:55Z\\\",\\\"message\\\":\\\"rt:false}}\\\\nI1124 11:30:55.515431 6151 services_controller.go:444] Built service openshift-marketplace/redhat-marketplace LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 11:30:55.515437 6151 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nF1124 11:30:55.515512 6151 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:30:55Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:30:55.515503 6151 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod 
openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.713129 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.720080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.720158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.720181 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.720212 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.720236 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.725669 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.738837 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.824934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.824994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.825011 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.825035 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.825045 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.927624 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.927660 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.927668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.927682 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.927696 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:09Z","lastTransitionTime":"2025-11-24T11:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:09 crc kubenswrapper[4789]: I1124 11:31:09.946987 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:31:09 crc kubenswrapper[4789]: E1124 11:31:09.951384 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:31:41.947818369 +0000 UTC m=+84.530289758 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.030925 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.030973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.030988 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.031012 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.031025 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.048756 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.048812 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.048860 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.048896 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.048962 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049040 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049081 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:31:42.049058291 +0000 UTC m=+84.631529670 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049098 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049126 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049171 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049198 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049140 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049255 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049106 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:31:42.049096823 +0000 UTC m=+84.631568192 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049336 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:31:42.049310618 +0000 UTC m=+84.631781997 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.049357 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:31:42.049348659 +0000 UTC m=+84.631820038 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.133346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.133394 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.133410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.133430 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.133446 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.168194 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.168304 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.168446 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.168515 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.168529 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.168649 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.168747 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.168810 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.236248 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.236317 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.236335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.236360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.236378 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.339758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.339839 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.339866 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.339901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.339924 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.442141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.442205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.442221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.442247 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.442264 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.523381 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/2.log" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.527045 4789 scope.go:117] "RemoveContainer" containerID="f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec" Nov 24 11:31:10 crc kubenswrapper[4789]: E1124 11:31:10.527182 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.543064 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.544364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.544410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.544421 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.544438 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.544452 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.556035 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.574640 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.599488 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f9
23ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.614364 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.621321 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.629257 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 
11:31:10.646997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.647034 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.647045 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.647065 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.647077 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.650967 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.666804 4789 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c
857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.679569 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 
11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.691387 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.705215 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.718048 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.734847 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.750187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.750255 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.750267 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.750287 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.750303 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.755441 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.773418 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.791139 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.809481 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.831812 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.853772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.853822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.853836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.853855 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.853869 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.861887 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.876437 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.886017 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.897034 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.910183 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.932124 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.942283 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 
11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.954736 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.956158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.956188 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.956199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.956214 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.956225 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:10Z","lastTransitionTime":"2025-11-24T11:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.968253 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.984015 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40
d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:10 crc kubenswrapper[4789]: I1124 11:31:10.999062 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.058203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.058246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.058257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.058273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.058284 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.066759 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.079546 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.092570 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:11Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.160891 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.160926 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.160937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.160955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.160966 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.264314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.264368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.264384 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.264404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.264419 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.368134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.368172 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.368182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.368197 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.368225 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.470994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.471021 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.471030 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.471042 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.471050 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.573773 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.573876 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.573899 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.573927 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.573949 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.677013 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.677067 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.677084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.677105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.677122 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.780809 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.780883 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.780909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.780937 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.780960 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.884171 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.884240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.884257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.884278 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.884294 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.987782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.987831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.987849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.987871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:11 crc kubenswrapper[4789]: I1124 11:31:11.987888 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:11Z","lastTransitionTime":"2025-11-24T11:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.091200 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.091297 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.091367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.091584 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.091734 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.169016 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.169048 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:12 crc kubenswrapper[4789]: E1124 11:31:12.169164 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.169375 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:12 crc kubenswrapper[4789]: E1124 11:31:12.169491 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.169865 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:12 crc kubenswrapper[4789]: E1124 11:31:12.169952 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:12 crc kubenswrapper[4789]: E1124 11:31:12.170029 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.193955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.193995 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.194005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.194019 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.194029 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.297055 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.297131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.297154 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.297182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.297202 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.401442 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.401540 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.401559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.401584 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.401601 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.504767 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.504808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.504816 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.504831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.504839 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.606887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.606926 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.606934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.606947 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.606955 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.710196 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.710237 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.710246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.710259 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.710269 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.813291 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.813327 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.813335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.813348 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.813356 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.915563 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.915622 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.915639 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.915663 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:12 crc kubenswrapper[4789]: I1124 11:31:12.915680 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:12Z","lastTransitionTime":"2025-11-24T11:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.017808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.017857 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.017869 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.017888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.017900 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.120914 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.120981 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.121017 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.121035 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.121047 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.223921 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.223992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.224014 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.224048 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.224070 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.326946 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.327348 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.327583 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.327784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.327960 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.431590 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.431688 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.431713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.431741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.431762 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.534605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.534666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.534755 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.534783 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.534801 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.617327 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.633224 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.638030 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.638076 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.638093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.638114 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.638130 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.638731 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.653571 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.667843 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.684860 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.697093 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.713018 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.725726 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.741207 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.741282 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.741295 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.741310 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.741319 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.741796 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.756009 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.768749 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.781991 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.794130 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.817230 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f9
23ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.830638 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.844161 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.844221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.844237 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.844261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.844278 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.847755 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.865929 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:13Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.947404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.947494 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:13 crc 
kubenswrapper[4789]: I1124 11:31:13.947521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.947548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:13 crc kubenswrapper[4789]: I1124 11:31:13.947565 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:13Z","lastTransitionTime":"2025-11-24T11:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.050697 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.051086 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.051297 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.051527 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.051704 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.154400 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.154516 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.154536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.154560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.154581 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.168674 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.168728 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.168764 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.168825 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.168886 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.169033 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.169089 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.169164 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.257409 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.257444 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.257453 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.257491 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.257502 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.293937 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.294037 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.294093 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs podName:1033d5e6-680c-4193-aade-8c3d801b0e3f nodeName:}" failed. No retries permitted until 2025-11-24 11:31:30.294078567 +0000 UTC m=+72.876549946 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs") pod "network-metrics-daemon-s69rz" (UID: "1033d5e6-680c-4193-aade-8c3d801b0e3f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.359574 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.359609 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.359619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.359657 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.359675 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.359675 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.462096 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.462131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.462142 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.462157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.462167 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.564838 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.564906 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.564926 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.564951 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.564972 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.667359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.667425 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.667438 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.667453 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.667475 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.769404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.769998 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.770098 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.770207 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.770277 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.789402 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.789454 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.789488 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.789507 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.789519 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.801947 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.805078 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.805107 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.805119 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.805133 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.805144 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.817221 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.820979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.821073 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
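Note: the status patch is rejected because the webhook endpoint at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z. A quick way to reproduce the verification failure from the node itself, as a sketch; the local trust store will differ from the apiserver's CA bundle, so the reported verify_message may be a chain error rather than the expiry:

    import socket
    import ssl

    # Endpoint taken from the webhook error above; run on the node itself.
    HOST, PORT = "127.0.0.1", 9743

    # A verifying TLS client, roughly what the webhook caller does.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # the webhook is addressed by bare IP

    try:
        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            with ctx.wrap_socket(sock) as tls:
                print("handshake OK, notAfter =", tls.getpeercert().get("notAfter"))
    except ssl.SSLCertVerificationError as err:
        # With the certificate from the log this reports an expiry
        # (or a chain error, if the internal CA is not trusted locally).
        print("verification failed:", err.verify_message)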
event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.821132 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.821282 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.821370 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.834991 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.839728 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.839795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
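Note: each "Error updating node status, will retry" record quotes the same multi-kilobyte strategic-merge patch inline; kubelet makes a small fixed number of attempts per sync (nodeStatusUpdateRetry, 5 in the upstream source) before giving up until the next node-status interval. When the payload does need inspecting, decoding it is easier than reading it escaped. A rough sketch, assuming one full record is available as a raw Python string exactly as printed above:

    import codecs
    import json

    def node_status_patch(line):
        """Decode the JSON patch quoted in a kubelet
        'failed to patch status' record. Rough sketch: Go's %q
        escaping is treated as unicode_escape, which is close
        enough for this ASCII-only payload."""
        body = line.split('err="', 1)[1].rsplit('"', 1)[0]
        body = codecs.decode(body, "unicode_escape")   # outer quoting level
        payload = body.split('failed to patch status "', 1)[1]
        payload = payload.rsplit('" for node', 1)[0]
        return json.loads(codecs.decode(payload, "unicode_escape"))

    # Example (raw_record is a hypothetical variable holding one record):
    # patch = node_status_patch(raw_record)
    # print(patch["status"]["conditions"][-1]["message"])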
event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.839809 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.839827 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.839840 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.855406 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.859397 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.859478 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.859499 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.859519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.859532 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.883343 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:14Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:14 crc kubenswrapper[4789]: E1124 11:31:14.883535 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.884822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.884848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.884857 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.884871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.884881 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.987617 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.987660 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.987675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.987690 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:14 crc kubenswrapper[4789]: I1124 11:31:14.987701 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:14Z","lastTransitionTime":"2025-11-24T11:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.090141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.090186 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.090199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.090219 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.090232 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.193004 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.193050 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.193064 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.193084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.193097 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.295811 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.295857 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.295872 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.295890 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.295903 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.398517 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.398574 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.398596 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.398619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.398636 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.501551 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.501624 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.501648 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.501675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.501696 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.605090 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.605142 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.605160 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.605183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.605200 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.708339 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.708381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.708392 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.708409 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.708437 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.811752 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.812048 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.812057 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.812070 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.812080 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.914904 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.914954 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.914967 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.914984 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:15 crc kubenswrapper[4789]: I1124 11:31:15.914995 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:15Z","lastTransitionTime":"2025-11-24T11:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.019094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.019153 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.019172 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.019192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.019208 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.122154 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.122259 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.122283 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.122312 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.122338 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.168900 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.169159 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.168910 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.168900 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:16 crc kubenswrapper[4789]: E1124 11:31:16.169331 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:16 crc kubenswrapper[4789]: E1124 11:31:16.169277 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:16 crc kubenswrapper[4789]: E1124 11:31:16.169450 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:16 crc kubenswrapper[4789]: E1124 11:31:16.169652 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.225971 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.226027 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.226044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.226070 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.226105 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.328858 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.328929 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.328953 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.328983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.329004 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.431513 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.431606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.431923 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.431991 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.432009 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.535036 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.535089 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.535105 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.535127 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.535144 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.638819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.638886 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.638904 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.638929 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.638948 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.741336 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.741364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.741372 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.741383 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.741392 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.843725 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.843790 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.843813 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.843841 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.843864 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.947795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.947853 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.947876 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.947905 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:16 crc kubenswrapper[4789]: I1124 11:31:16.947926 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:16Z","lastTransitionTime":"2025-11-24T11:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.051399 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.051533 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.051559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.051669 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.051746 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.154676 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.154758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.154779 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.154805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.154823 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.258337 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.258397 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.258415 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.258437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.258454 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.361575 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.361642 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.361665 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.361698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.361720 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.466590 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.466822 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.466852 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.466930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.466959 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.571013 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.571094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.571116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.571145 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.571169 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.673694 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.673751 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.673768 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.673793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.673810 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.777192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.777253 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.777270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.777293 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.777309 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.880840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.880957 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.880982 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.881052 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.881076 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.984696 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.984754 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.984771 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.984794 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:17 crc kubenswrapper[4789]: I1124 11:31:17.984811 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:17Z","lastTransitionTime":"2025-11-24T11:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.088541 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.088668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.088698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.088782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.088810 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.168803 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.168908 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.168989 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:18 crc kubenswrapper[4789]: E1124 11:31:18.169520 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:18 crc kubenswrapper[4789]: E1124 11:31:18.169593 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:18 crc kubenswrapper[4789]: E1124 11:31:18.169744 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.172339 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:18 crc kubenswrapper[4789]: E1124 11:31:18.173098 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.194369 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.194767 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.194802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.194819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.194842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.194859 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.217710 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.235915 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.260073 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.275763 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.290216 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.297365 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.297419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.297437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.297508 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.297527 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.302182 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.317418 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 
11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 
11:31:18.329697 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.343074 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.354699 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.363522 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.372335 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc 
kubenswrapper[4789]: I1124 11:31:18.382368 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.393974 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.399855 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.399909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.399919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.399932 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.399942 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.411144 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.419389 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:18Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.503116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.503218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.503229 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.503243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.503252 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.606071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.606142 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.606160 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.606182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.606202 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.709039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.709120 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.709144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.709177 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.709200 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.811767 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.811830 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.811848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.811873 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.811891 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.914244 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.914322 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.914344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.914372 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:18 crc kubenswrapper[4789]: I1124 11:31:18.914393 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:18Z","lastTransitionTime":"2025-11-24T11:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.017113 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.017161 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.017177 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.017195 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.017209 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.120602 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.120664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.120676 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.120695 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.120707 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.224167 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.224236 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.224262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.224295 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.224320 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.327317 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.327385 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.327412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.327445 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.327511 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.430021 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.430081 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.430099 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.430124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.430143 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.532930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.532990 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.533009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.533031 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.533048 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.635730 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.635780 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.635790 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.635806 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.635817 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.738281 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.738344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.738356 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.738377 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.738389 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.841568 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.841613 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.841625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.841641 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.841653 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.944205 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.944251 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.944262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.944280 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:19 crc kubenswrapper[4789]: I1124 11:31:19.944294 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:19Z","lastTransitionTime":"2025-11-24T11:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.046867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.046913 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.046925 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.046941 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.046952 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.151575 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.151635 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.151661 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.151711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.151730 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
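The recurring KubeletNotReady condition is the kubelet's CNI readiness check: it finds no configuration file under /etc/kubernetes/cni/net.d/, and the component responsible for writing one, the ovnkube-controller container, is itself crash-looping on the same expired webhook certificate. A quick check mirroring the kubelet's complaint (run on the node itself; the path is copied verbatim from the log):

    import os

    cni_dir = "/etc/kubernetes/cni/net.d/"  # path taken from the log message
    try:
        entries = sorted(os.listdir(cni_dir))
    except FileNotFoundError:
        entries = []
    print(entries if entries else "no CNI configuration files found")

While ovnkube-controller stays down this prints the empty result, and the NetworkReady=false / "Node became not ready" lines will keep repeating on every status sync, as seen above.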
Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.168826 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.168897 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:20 crc kubenswrapper[4789]: E1124 11:31:20.168945 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.168973 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:20 crc kubenswrapper[4789]: E1124 11:31:20.169055 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.169195 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:20 crc kubenswrapper[4789]: E1124 11:31:20.169250 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:20 crc kubenswrapper[4789]: E1124 11:31:20.169360 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.255314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.255345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.255354 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.255366 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.255375 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.357729 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.357795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.357817 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.357847 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.357867 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.460618 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.460684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.460701 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.460725 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.460742 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.563235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.563287 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.563302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.563323 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.563339 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.666588 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.666640 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.666653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.666671 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.666684 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.769920 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.769968 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.769989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.770010 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.770025 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.874050 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.874124 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.874145 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.874168 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.874184 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.980928 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.980973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.980985 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.981000 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:20 crc kubenswrapper[4789]: I1124 11:31:20.981014 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:20Z","lastTransitionTime":"2025-11-24T11:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.083794 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.083879 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.083892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.083911 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.083924 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.187271 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.187332 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.187350 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.187375 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.187393 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.289660 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.289691 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.289699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.289711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.289720 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.392145 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.392187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.392199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.392213 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.392223 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.493918 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.493968 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.493984 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.494004 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.494019 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.596840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.596881 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.596893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.596907 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.596917 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.699760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.699792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.699803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.699820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.699830 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.802384 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.802420 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.802428 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.802441 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.802449 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.906094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.906398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.906503 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.906595 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:21 crc kubenswrapper[4789]: I1124 11:31:21.906674 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:21Z","lastTransitionTime":"2025-11-24T11:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.009253 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.009332 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.009357 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.009386 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.009409 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.110971 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.111006 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.111015 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.111026 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.111036 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.168701 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.168787 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.169078 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:22 crc kubenswrapper[4789]: E1124 11:31:22.169297 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.169398 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:22 crc kubenswrapper[4789]: E1124 11:31:22.169839 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:22 crc kubenswrapper[4789]: E1124 11:31:22.169964 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:22 crc kubenswrapper[4789]: E1124 11:31:22.170047 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.170592 4789 scope.go:117] "RemoveContainer" containerID="f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec" Nov 24 11:31:22 crc kubenswrapper[4789]: E1124 11:31:22.170851 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.214084 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.214140 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.214156 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.214179 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.214196 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.317261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.317517 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.317532 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.317548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.317560 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.419911 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.419952 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.419971 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.419987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.419996 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.522423 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.522480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.522492 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.522510 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.522519 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.625097 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.625127 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.625137 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.625151 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.625160 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.727798 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.727841 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.727851 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.727868 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.727878 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.830153 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.830190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.830202 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.830216 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.830226 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.933173 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.933225 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.933245 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.933268 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:22 crc kubenswrapper[4789]: I1124 11:31:22.933284 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:22Z","lastTransitionTime":"2025-11-24T11:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.035837 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.035876 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.035887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.035901 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.035916 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.138000 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.138046 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.138057 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.138071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.138082 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.240685 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.240747 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.240766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.240791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.240812 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.343966 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.344281 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.344425 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.344619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.344735 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.446416 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.446664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.446747 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.446837 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.446923 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.548877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.549090 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.549183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.549289 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.549387 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.652071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.652138 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.652154 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.652176 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.652189 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.754053 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.754081 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.754091 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.754103 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.754112 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.855848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.855884 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.855894 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.855906 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.855914 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.958122 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.958527 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.958613 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.958693 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:23 crc kubenswrapper[4789]: I1124 11:31:23.958774 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:23Z","lastTransitionTime":"2025-11-24T11:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.061919 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.061963 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.061973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.061988 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.062000 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.163913 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.163946 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.163956 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.163968 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.163977 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.168250 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.168277 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:24 crc kubenswrapper[4789]: E1124 11:31:24.168353 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.168522 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:24 crc kubenswrapper[4789]: E1124 11:31:24.168600 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:24 crc kubenswrapper[4789]: E1124 11:31:24.168636 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.168809 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:24 crc kubenswrapper[4789]: E1124 11:31:24.169065 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.265989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.266039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.266051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.266072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.266085 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.368077 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.368123 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.368138 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.368158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.368173 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.470722 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.471005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.471086 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.471172 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.471270 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.573042 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.573417 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.573680 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.573964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.574225 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.677122 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.677161 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.677169 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.677183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.677192 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.780575 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.780967 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.780992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.781016 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.781035 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.883489 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.883533 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.883547 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.883564 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.883577 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.986660 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.986702 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.986712 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.986728 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:24 crc kubenswrapper[4789]: I1124 11:31:24.986739 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:24Z","lastTransitionTime":"2025-11-24T11:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.024763 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.024808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.024820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.024836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.024848 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: E1124 11:31:25.039734 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.042770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.042799 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.042809 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.042824 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.042836 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: E1124 11:31:25.055578 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.058397 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.058431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.058443 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.058476 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.058490 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: E1124 11:31:25.071737 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.074331 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.074361 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.074371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.074385 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.074396 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: E1124 11:31:25.085816 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.089572 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.089631 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.089642 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.089692 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.089723 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: E1124 11:31:25.103743 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:25Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:25 crc kubenswrapper[4789]: E1124 11:31:25.103852 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.105726 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.105748 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.105758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.105770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.105779 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.208097 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.208132 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.208144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.208159 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.208172 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.309974 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.310000 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.310008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.310020 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.310030 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.412520 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.412555 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.412564 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.412578 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.412585 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.515203 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.515238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.515246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.515260 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.515272 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.616522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.616560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.616571 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.616585 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.616595 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.718366 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.718401 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.718412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.718424 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.718434 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.820735 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.820787 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.820795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.820808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.820818 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.923316 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.923360 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.923371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.923389 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:25 crc kubenswrapper[4789]: I1124 11:31:25.923400 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:25Z","lastTransitionTime":"2025-11-24T11:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.025760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.025793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.025801 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.025813 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.025823 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.127872 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.127906 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.127916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.127930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.127942 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.169160 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.169182 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:26 crc kubenswrapper[4789]: E1124 11:31:26.169258 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.169160 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:26 crc kubenswrapper[4789]: E1124 11:31:26.169308 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.169498 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:26 crc kubenswrapper[4789]: E1124 11:31:26.169555 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:26 crc kubenswrapper[4789]: E1124 11:31:26.169545 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.230279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.230323 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.230334 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.230348 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.230360 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.332574 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.332616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.332630 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.332646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.332658 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.434881 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.434920 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.434929 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.434941 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.434951 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.537817 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.537905 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.537935 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.537966 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.537988 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.640553 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.640591 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.640599 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.640614 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.640624 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.742549 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.742594 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.742603 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.742617 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.742627 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.844731 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.844776 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.844785 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.844800 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.844810 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.947584 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.947631 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.947643 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.947666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:26 crc kubenswrapper[4789]: I1124 11:31:26.947679 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:26Z","lastTransitionTime":"2025-11-24T11:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.050171 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.050477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.050561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.050634 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.050706 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.152529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.152570 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.152579 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.152591 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.152599 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.254974 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.255302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.255382 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.255447 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.255539 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.357683 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.357723 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.357733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.357748 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.357758 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.459707 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.459757 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.459769 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.459782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.459790 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.562181 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.562226 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.562237 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.562253 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.562263 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.665296 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.665334 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.665380 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.665398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.665409 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.767727 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.767798 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.767818 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.767843 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.767860 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.870102 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.870134 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.870144 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.870158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.870169 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.972284 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.972610 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.972698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.972777 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:27 crc kubenswrapper[4789]: I1124 11:31:27.972859 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:27Z","lastTransitionTime":"2025-11-24T11:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.075450 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.075595 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.075619 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.075646 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.075663 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.168693 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:28 crc kubenswrapper[4789]: E1124 11:31:28.169319 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.169086 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:28 crc kubenswrapper[4789]: E1124 11:31:28.169590 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.169121 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:28 crc kubenswrapper[4789]: E1124 11:31:28.169810 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.168746 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:28 crc kubenswrapper[4789]: E1124 11:31:28.170015 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.177806 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.177840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.177850 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.177867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.177878 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.184335 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.194280 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.208604 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.218900 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.231567 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.243029 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.255635 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.269476 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.278554 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.279825 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.279848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.279859 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.279875 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.279887 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.288133 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.299828 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.312683 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.374597 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f9
23ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.382112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.382147 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.382157 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.382172 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.382183 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.384164 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.395047 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.407584 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.417349 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:28Z is after 2025-08-24T17:21:41Z"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.483844 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.484120 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.484210 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.484306 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.484381 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.586306 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.586345 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.586356 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.586371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.586381 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.688335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.688578 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.688680 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.688779 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.688866 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.791932 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.791964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.791973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.791989 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.791998 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.894403 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.894454 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.894481 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.894494 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.894502 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.997788 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.997977 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.998060 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.998131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:28 crc kubenswrapper[4789]: I1124 11:31:28.998203 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:28Z","lastTransitionTime":"2025-11-24T11:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.100417 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.100487 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.100498 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.100516 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.100525 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.181368 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.203546 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.203837 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.203933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.204021 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.204106 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.306597 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.306670 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.306681 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.306697 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.306709 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.409480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.409515 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.409525 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.409537 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.409547 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.513045 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.513376 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.513587 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.513765 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.513903 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.616477 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.616511 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.616522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.616536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.616546 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.718645 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.718684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.718694 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.718708 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.718719 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.821370 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.821408 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.821419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.821434 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.821443 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.924126 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.924160 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.924171 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.924186 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:29 crc kubenswrapper[4789]: I1124 11:31:29.924197 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:29Z","lastTransitionTime":"2025-11-24T11:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.026496 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.026561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.026576 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.026594 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.026603 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.129010 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.129257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.129344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.129429 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.129545 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.169081 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.169084 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.169223 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.169094 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:30 crc kubenswrapper[4789]: E1124 11:31:30.169360 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:31:30 crc kubenswrapper[4789]: E1124 11:31:30.169368 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:31:30 crc kubenswrapper[4789]: E1124 11:31:30.169554 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f"
Nov 24 11:31:30 crc kubenswrapper[4789]: E1124 11:31:30.169609 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.232186 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.232218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.232227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.232239 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.232247 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.334791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.334836 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.334846 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.334860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.334874 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.362375 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:30 crc kubenswrapper[4789]: E1124 11:31:30.362611 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 11:31:30 crc kubenswrapper[4789]: E1124 11:31:30.362694 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs podName:1033d5e6-680c-4193-aade-8c3d801b0e3f nodeName:}" failed. No retries permitted until 2025-11-24 11:32:02.362671368 +0000 UTC m=+104.945142787 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs") pod "network-metrics-daemon-s69rz" (UID: "1033d5e6-680c-4193-aade-8c3d801b0e3f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.437218 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.437258 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.437275 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.437297 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.437316 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.539728 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.539775 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.539786 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.539802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.539814 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.642351 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.642414 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.642427 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.642446 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.642486 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.744927 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.744961 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.744970 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.744983 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.744991 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.853181 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.853503 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.853601 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.853687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.853765 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.955943 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.955982 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.955992 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.956005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:30 crc kubenswrapper[4789]: I1124 11:31:30.956015 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:30Z","lastTransitionTime":"2025-11-24T11:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.059388 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.059570 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.059597 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.059618 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.059632 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.161867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.161912 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.161923 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.161939 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.161950 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.264287 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.264319 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.264329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.264342 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.264351 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.366778 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.366849 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.366861 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.366877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.366887 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.469605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.469649 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.469659 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.469673 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.469682 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.571837 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.571909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.571920 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.571935 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.571947 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.674358 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.674752 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.674967 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.675112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.675261 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.778051 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.778370 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.778560 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.778770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.778920 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.882346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.882766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.882909 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.883011 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.883132 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.985780 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.985819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.985831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.985846 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:31 crc kubenswrapper[4789]: I1124 11:31:31.985858 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:31Z","lastTransitionTime":"2025-11-24T11:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.088745 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.088805 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.088825 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.088848 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.088864 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.169178 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.169213 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:32 crc kubenswrapper[4789]: E1124 11:31:32.169321 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.169442 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:31:32 crc kubenswrapper[4789]: E1124 11:31:32.169621 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.170054 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:31:32 crc kubenswrapper[4789]: E1124 11:31:32.170439 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:31:32 crc kubenswrapper[4789]: E1124 11:31:32.170878 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.192273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.192332 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.192349 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.192372 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.192390 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.295307 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.295346 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.295356 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.295370 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.295379 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.397585 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.397830 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.397949 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.398049 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.398136 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.504321 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.504364 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.504375 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.504391 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.504403 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.595442 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/0.log"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.595539 4789 generic.go:334] "Generic (PLEG): container finished" podID="776a7cdb-6468-4e8a-8577-3535ff549781" containerID="7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8" exitCode=1
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.595578 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5fgg5" event={"ID":"776a7cdb-6468-4e8a-8577-3535ff549781","Type":"ContainerDied","Data":"7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8"}
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.596065 4789 scope.go:117] "RemoveContainer" containerID="7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.606291 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.606319 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.606327 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.606339 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.606347 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.620778 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.633989 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.644774 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.654983 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.666360 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.677879 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.689674 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.701888 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.710222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.710437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.710529 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.710602 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.710677 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.714106 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.727362 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.740272 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.750644 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.760282 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.771587 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:31Z\\\",\\\"message\\\":\\\"2025-11-24T11:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e\\\\n2025-11-24T11:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e to /host/opt/cni/bin/\\\\n2025-11-24T11:30:46Z [verbose] multus-daemon started\\\\n2025-11-24T11:30:46Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.781125 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.790623 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad91557a-c8cf-4dcd-b434-48f7cdbf9955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb7c8772394f7e4e2a72f2f354cf4b45d4e4ec2c5897c415583c26012e4508e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.804266 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.812649 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.812678 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.812685 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.812698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.812710 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.817479 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:32Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.914789 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.914821 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.914829 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.914841 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:32 crc kubenswrapper[4789]: I1124 11:31:32.914850 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:32Z","lastTransitionTime":"2025-11-24T11:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.017537 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.017585 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.017603 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.017624 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.017641 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.120701 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.120802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.120821 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.120845 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.120862 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.223012 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.223067 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.223126 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.223147 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.223161 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.326006 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.326044 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.326052 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.326068 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.326077 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.429773 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.429839 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.429860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.429910 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.429932 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.533438 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.533544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.533562 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.533589 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.533607 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.601517 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/0.log" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.601585 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5fgg5" event={"ID":"776a7cdb-6468-4e8a-8577-3535ff549781","Type":"ContainerStarted","Data":"d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.623724 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.640299 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.640371 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.640391 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.640419 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.640445 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.643929 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad91557a-c8cf-4dcd-b434-48f7cdbf9955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb7c8772394f7e4e2a72f2f354cf4b45d4e4ec2c5897c415583c26012e4508e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.669153 4789 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f9
8a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.691753 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.713287 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.735727 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.744153 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.744222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.744240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.744262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.744279 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.752987 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.771263 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:31Z\\\",\\\"message\\\":\\\"2025-11-24T11:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e\\\\n2025-11-24T11:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e to /host/opt/cni/bin/\\\\n2025-11-24T11:30:46Z [verbose] multus-daemon started\\\\n2025-11-24T11:30:46Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.787818 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.800664 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.813239 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.834952 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f9
23ec49fa06ef0fff099d0bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.846086 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.847523 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.847580 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.847593 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.847612 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.847626 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.858130 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.869502 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.885367 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.899438 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.910717 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:33Z is after 2025-08-24T17:21:41Z" Nov 24 
11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.950229 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.950273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.950284 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.950300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:33 crc kubenswrapper[4789]: I1124 11:31:33.950311 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:33Z","lastTransitionTime":"2025-11-24T11:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.053197 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.053267 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.053286 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.053311 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.053329 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.156709 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.156772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.156789 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.156813 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.156831 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.168297 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.168374 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.168368 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.168333 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:34 crc kubenswrapper[4789]: E1124 11:31:34.168576 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:34 crc kubenswrapper[4789]: E1124 11:31:34.168766 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:34 crc kubenswrapper[4789]: E1124 11:31:34.168956 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:34 crc kubenswrapper[4789]: E1124 11:31:34.169098 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.259002 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.259068 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.259086 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.259113 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.259132 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.361829 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.361887 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.361910 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.361936 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.361953 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.464976 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.465038 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.465057 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.465081 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.465097 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.567819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.567881 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.567898 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.567921 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.567943 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.671366 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.671420 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.671437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.671504 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.671529 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.775196 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.775324 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.775352 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.775380 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.775402 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.877755 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.877793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.877803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.877819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.877830 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.980274 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.980311 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.980322 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.980337 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:34 crc kubenswrapper[4789]: I1124 11:31:34.980350 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:34Z","lastTransitionTime":"2025-11-24T11:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.083512 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.083575 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.083596 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.083622 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.083642 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.132847 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.132912 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.132936 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.132963 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.132985 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: E1124 11:31:35.154931 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.161389 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.161537 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.161566 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.161596 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.161619 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.170348 4789 scope.go:117] "RemoveContainer" containerID="f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec" Nov 24 11:31:35 crc kubenswrapper[4789]: E1124 11:31:35.183940 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.189867 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.192952 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.192972 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.192997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.193012 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: E1124 11:31:35.212211 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.215823 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.215862 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.215871 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.215885 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.215895 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: E1124 11:31:35.235151 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.239628 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.239678 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
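The patch payload itself is irrelevant to these failures: every attempt in this burst dies in the TLS handshake, because the webhook's serving certificate expired on 2025-08-24 while the node's clock reads 2025-11-24. The "certificate has expired or is not yet valid" text is Go's standard x509 validity-window check; a minimal sketch of that check follows (the PEM file path is hypothetical, for illustration only; on the node the certificate arrives during the handshake rather than from disk):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path: dump the serving certificate presented at
	// https://127.0.0.1:9743 to this file first.
	data, err := os.ReadFile("webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	// The same rule the TLS handshake applies: outside the
	// [NotBefore, NotAfter] window, verification fails with
	// "x509: certificate has expired or is not yet valid".
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		fmt.Println("x509: certificate has expired or is not yet valid")
	}
}
```

Until the node-identity serving certificate is rotated, every Post to https://127.0.0.1:9743 fails the same way regardless of payload, which is why the retries below change nothing.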
event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.239692 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.239713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.239728 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: E1124 11:31:35.253839 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: E1124 11:31:35.254043 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.255445 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
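Note the shape of the sequence that just ended: kubelet_node_status.go retries the status patch a bounded number of times (five attempts in the upstream kubelet; treat the exact count as illustrative here), then emits "Unable to update node status" with "update node status exceeds retry count" and leaves it to the next sync loop to start over. A sketch of that bounded-retry pattern, with the failure stubbed to match the webhook error above (helper names are illustrative, not the kubelet's):

```go
package main

import (
	"errors"
	"fmt"
)

// Mirrors the bounded retry visible in the log: a fixed number of
// consecutive patch failures, then "exceeds retry count".
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the real API call; here it always
// fails the way the expired-certificate webhook does.
func patchNodeStatus(attempt int) error {
	return errors.New("tls: failed to verify certificate: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```

Because the failure is deterministic (an expired certificate, not a transient network blip), every retry round produces the identical error, which is what makes this log so repetitive.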
event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.255518 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.255538 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.255564 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.255583 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.358554 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.358591 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.358602 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.358617 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.358628 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.460630 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.460674 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.460686 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.460704 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.460717 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.563879 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.563924 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.563933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.563948 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.563959 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.614846 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/2.log" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.618334 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.618880 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.637848 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.667140 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.667454 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.667621 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.667768 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.667916 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.672293 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed 
atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.694499 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.718002 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.734518 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.753210 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.770283 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 
2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.770528 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.770548 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.770559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.770573 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.770584 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.784624 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db77
08c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\"
:\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.797838 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154e
dc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.811025 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.823632 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.836185 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.852266 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.864680 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.872677 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.872711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.872722 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.872737 
4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.872748 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.882667 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:31Z\\\",\\\"message\\\":\\\"2025-11-24T11:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e\\\\n2025-11-24T11:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e to /host/opt/cni/bin/\\\\n2025-11-24T11:30:46Z [verbose] multus-daemon started\\\\n2025-11-24T11:30:46Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.897245 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.909320 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad91557a-c8cf-4dcd-b434-48f7cdbf9955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb7c8772394f7e4e2a72f2f354cf4b45d4e4ec2c5897c415583c26012e4508e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.920101 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:35Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.975099 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.975133 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.975142 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.975155 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:35 crc kubenswrapper[4789]: I1124 11:31:35.975164 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:35Z","lastTransitionTime":"2025-11-24T11:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.079810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.080046 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.080158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.080252 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.080323 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.169166 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.169224 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.169199 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.169188 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:36 crc kubenswrapper[4789]: E1124 11:31:36.169383 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:36 crc kubenswrapper[4789]: E1124 11:31:36.169513 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:36 crc kubenswrapper[4789]: E1124 11:31:36.169610 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:36 crc kubenswrapper[4789]: E1124 11:31:36.169677 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.181784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.181820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.181831 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.181844 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.181854 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.285181 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.285213 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.285221 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.285234 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.285243 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.387184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.387249 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.387273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.387308 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.387336 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.490534 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.490603 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.490626 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.490654 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.490678 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.596480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.596519 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.596531 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.596547 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.596557 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.625115 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/3.log" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.626037 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/2.log" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.630969 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" exitCode=1 Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.631045 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.631112 4789 scope.go:117] "RemoveContainer" containerID="f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.632492 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:31:36 crc kubenswrapper[4789]: E1124 11:31:36.632832 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.657610 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.674898 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.689870 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.699126 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.699166 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.699180 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.699199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.699214 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.709874 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.730884 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f654e0567288af612581e353fc5033f6afb865f923ec49fa06ef0fff099d0bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:08Z\\\",\\\"message\\\":\\\".go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:08Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:31:08.963815 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn\\\\nI1124 11:31:08.963817 6358 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zthhc\\\\nI1124 11:31:08.963820 6358 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9czvn in node crc\\\\nI1124 11:31:08.963825 6358 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9czvn after 0 failed atte\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:36Z\\\",\\\"message\\\":\\\"nshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1124 11:31:36.069562 6717 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-5fgg5 in node crc\\\\nI1124 11:31:36.069564 6717 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1124 11:31:36.069570 6717 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-5fgg5 after 0 failed attempt(s)\\\\nI1124 11:31:36.069572 6717 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1124 11:31:36.069576 6717 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-5fgg5\\\\nF1124 11:31:36.069447 6717 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.743438 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.765069 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.785872 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.801440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.801495 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.801507 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.801524 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.801534 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.809581 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.824879 4789 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.843241 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.861133 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:31Z\\\",\\\"message\\\":\\\"2025-11-24T11:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e\\\\n2025-11-24T11:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e to /host/opt/cni/bin/\\\\n2025-11-24T11:30:46Z [verbose] multus-daemon started\\\\n2025-11-24T11:30:46Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.876170 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.889556 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad91557a-c8cf-4dcd-b434-48f7cdbf9955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb7c8772394f7e4e2a72f2f354cf4b45d4e4ec2c5897c415583c26012e4508e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.904163 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:36 crc kubenswrapper[4789]: 
I1124 11:31:36.904222 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.904237 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.904262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.904277 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:36Z","lastTransitionTime":"2025-11-24T11:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.909756 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 
11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.923943 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.938356 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:36 crc kubenswrapper[4789]: I1124 11:31:36.952847 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:36Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.006994 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.007072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.007094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.007125 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.007147 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.110892 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.110962 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.110979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.111001 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.111018 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.214511 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.214606 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.214631 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.214666 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.214688 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.318169 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.318228 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.318245 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.318268 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.318285 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.421396 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.421514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.421556 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.421579 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.421596 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.525388 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.525470 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.525480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.525494 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.525504 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.628781 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.628888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.628952 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.628980 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.629040 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.638878 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/3.log" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.646062 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:31:37 crc kubenswrapper[4789]: E1124 11:31:37.646349 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.668000 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.695958 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.717665 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 
11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.731947 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.732082 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.732109 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.732136 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.732154 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.734578 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.754940 4789 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:31Z\\\",\\\"message\\\":\\\"2025-11-24T11:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e\\\\n2025-11-24T11:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e to /host/opt/cni/bin/\\\\n2025-11-24T11:30:46Z [verbose] multus-daemon started\\\\n2025-11-24T11:30:46Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.772788 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.785375 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad91557a-c8cf-4dcd-b434-48f7cdbf9955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb7c8772394f7e4e2a72f2f354cf4b45d4e4ec2c5897c415583c26012e4508e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.804390 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.823135 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.838594 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.838638 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.838648 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.838665 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.838677 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.845795 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.862786 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.880089 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.898254 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.914052 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.936613 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.941420 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.941473 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.941483 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.941496 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.941505 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:37Z","lastTransitionTime":"2025-11-24T11:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.964787 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:36Z\\\",\\\"message\\\":\\\"nshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1124 11:31:36.069562 6717 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-5fgg5 in node crc\\\\nI1124 11:31:36.069564 6717 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1124 11:31:36.069570 6717 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-5fgg5 after 0 failed attempt(s)\\\\nI1124 11:31:36.069572 6717 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1124 11:31:36.069576 6717 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-5fgg5\\\\nF1124 11:31:36.069447 6717 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.981927 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:37 crc kubenswrapper[4789]: I1124 11:31:37.993962 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.043480 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.043711 4789 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.043800 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.043928 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.044017 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.146750 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.146793 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.146808 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.146828 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.146845 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.168716 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.168785 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:38 crc kubenswrapper[4789]: E1124 11:31:38.168839 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:38 crc kubenswrapper[4789]: E1124 11:31:38.168904 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.168967 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:38 crc kubenswrapper[4789]: E1124 11:31:38.169023 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.169142 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:38 crc kubenswrapper[4789]: E1124 11:31:38.169231 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.186792 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.207108 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.223910 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.244640 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 
2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.249802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.249929 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.250020 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.250113 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.250202 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.261647 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 
11:31:38.287619 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:31Z\\\",\\\"message\\\":\\\"2025-11-24T11:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e\\\\n2025-11-24T11:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e to /host/opt/cni/bin/\\\\n2025-11-24T11:30:46Z [verbose] multus-daemon started\\\\n2025-11-24T11:30:46Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.303231 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.317504 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad91557a-c8cf-4dcd-b434-48f7cdbf9955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb7c8772394f7e4e2a72f2f354cf4b45d4e4ec2c5897c415583c26012e4508e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.336662 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.353172 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.353223 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.353243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.353269 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.353290 4789 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.353902 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.367304 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.381304 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.395550 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.412112 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc 
kubenswrapper[4789]: I1124 11:31:38.427766 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.446550 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.455627 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.455682 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.455699 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.455721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.455739 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.471156 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:36Z\\\",\\\"message\\\":\\\"nshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1124 11:31:36.069562 6717 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-5fgg5 in node crc\\\\nI1124 11:31:36.069564 6717 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1124 11:31:36.069570 6717 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-5fgg5 after 0 failed attempt(s)\\\\nI1124 11:31:36.069572 6717 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1124 11:31:36.069576 6717 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-5fgg5\\\\nF1124 11:31:36.069447 6717 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.485926 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:38Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.558712 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.559852 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.560120 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.560309 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.560521 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.663581 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.663625 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.663636 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.663651 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.663661 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.766054 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.766277 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.766379 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.766475 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.766548 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.869243 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.869308 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.869325 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.869348 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.869366 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.971816 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.971882 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.971903 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.971930 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:38 crc kubenswrapper[4789]: I1124 11:31:38.971952 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:38Z","lastTransitionTime":"2025-11-24T11:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.074312 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.074350 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.074359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.074373 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.074384 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.177542 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.177602 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.177623 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.177647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.177666 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.280071 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.280107 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.280118 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.280135 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.280146 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.382368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.382403 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.382413 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.382429 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.382440 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.484846 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.484916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.484940 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.484972 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.484996 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.588272 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.588330 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.588347 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.588373 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.588391 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.690791 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.690932 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.690960 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.691003 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.691040 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.793747 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.793800 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.793811 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.793826 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.793834 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.897561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.897618 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.897634 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.897659 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:39 crc kubenswrapper[4789]: I1124 11:31:39.897681 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:39Z","lastTransitionTime":"2025-11-24T11:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.000196 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.000256 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.000274 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.000298 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.000314 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.103640 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.103707 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.103726 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.103750 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.103766 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.168237 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:40 crc kubenswrapper[4789]: E1124 11:31:40.168404 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.168698 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:40 crc kubenswrapper[4789]: E1124 11:31:40.168795 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.169051 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:40 crc kubenswrapper[4789]: E1124 11:31:40.169144 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.169521 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:40 crc kubenswrapper[4789]: E1124 11:31:40.169645 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.206920 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.206970 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.206987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.207008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.207024 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.310217 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.310277 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.310295 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.310319 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.310335 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.413894 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.413944 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.413955 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.413973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.413984 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.516918 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.516952 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.516960 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.516972 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.516982 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.618899 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.618971 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.618979 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.618991 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.619000 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.722670 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.722743 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.722760 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.722784 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.722803 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.825821 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.825965 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.825991 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.826022 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.826045 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.929211 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.929261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.929273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.929290 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:40 crc kubenswrapper[4789]: I1124 11:31:40.929302 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:40Z","lastTransitionTime":"2025-11-24T11:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.032934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.032993 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.033010 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.033033 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.033051 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.136840 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.136894 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.136911 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.136938 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.136956 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.240574 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.240647 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.240674 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.240698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.240716 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.344537 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.344631 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.344661 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.344693 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.344721 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.448543 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.448598 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.448616 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.448638 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.448657 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.551506 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.551559 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.551578 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.551599 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.551618 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.662390 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.662449 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.662515 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.662555 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.662578 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.765591 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.765668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.765688 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.765714 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.765733 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.869250 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.869409 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.869435 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.869506 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.869530 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.972811 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.972857 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.972872 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.972893 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:41 crc kubenswrapper[4789]: I1124 11:31:41.972909 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:41Z","lastTransitionTime":"2025-11-24T11:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.000525 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.000691 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.000663444 +0000 UTC m=+148.583134833 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.076288 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.076351 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.076368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.076396 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.076438 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.101925 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.101981 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.102017 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.102044 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102215 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
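The TearDownAt failure parks the unmount for 64 s because kubevirt.io.hostpath-provisioner is missing from the kubelet's registered-plugin list (populated via /var/lib/kubelet/plugins_registry/). A sketch, assuming the `kubernetes` Python client, that checks whether the API server at least has a CSIDriver object of that name; note the kubelet error is about node-local registration, which can lag or differ from these API objects:

# Sketch, assuming the `kubernetes` Python client and cluster access.
from kubernetes import client, config

config.load_kube_config()
# CSIDriver objects registered with the API server (storage.k8s.io/v1).
names = [d.metadata.name for d in client.StorageV1Api().list_csi_driver().items]
print("kubevirt.io.hostpath-provisioner" in names, sorted(names))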
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102237 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102251 4789 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102255 4789 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102301 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.102285261 +0000 UTC m=+148.684756630 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102375 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.102338432 +0000 UTC m=+148.684809871 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102393 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102430 4789 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102453 4789 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102573 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.102548088 +0000 UTC m=+148.685019517 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102261 4789 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.102736 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.102695183 +0000 UTC m=+148.685166602 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.169104 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.169174 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.169239 4789 util.go:30] "No sandbox for pod can be found. 
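The `object "ns"/"name" not registered` errors come from the kubelet's local watch cache (it has not yet synced those ConfigMaps and Secrets), not necessarily from missing objects. A sketch, assuming the `kubernetes` Python client, to confirm the referenced objects exist server-side:

# Sketch, assuming the `kubernetes` Python client; "not registered" is the
# kubelet's local cache talking, not proof the objects are absent.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()
refs = [("configmap", "openshift-network-diagnostics", "kube-root-ca.crt"),
        ("configmap", "openshift-network-diagnostics", "openshift-service-ca.crt"),
        ("configmap", "openshift-network-console", "networking-console-plugin"),
        ("secret", "openshift-network-console", "networking-console-plugin-cert")]
for kind, ns, name in refs:
    try:
        if kind == "configmap":
            v1.read_namespaced_config_map(name, ns)
        else:
            v1.read_namespaced_secret(name, ns)
        print(f"{kind} {ns}/{name}: present")
    except ApiException as e:
        print(f"{kind} {ns}/{name}: HTTP {e.status}")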
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.169502 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.169445 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.169717 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.169816 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f"
Nov 24 11:31:42 crc kubenswrapper[4789]: E1124 11:31:42.169940 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.178192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.178268 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.178289 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.178311 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.178327 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
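Four pods are skipped on every sync loop until the network plugin reports ready. A sketch, assuming the `kubernetes` Python client, to check the pods named in the log; they should leave Pending once a CNI config appears:

# Sketch, assuming the `kubernetes` Python client; pod names taken from the log.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
pods = [("openshift-network-diagnostics", "network-check-target-xd92c"),
        ("openshift-network-diagnostics", "network-check-source-55646444c4-trplf"),
        ("openshift-multus", "network-metrics-daemon-s69rz"),
        ("openshift-network-console", "networking-console-plugin-85b44fc459-gdk6g")]
for ns, name in pods:
    pod = v1.read_namespaced_pod(name, ns)
    print(f"{ns}/{name}: phase={pod.status.phase}")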
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.282023 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.282083 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.282099 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.282127 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.282150 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.384802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.384869 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.384888 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.384912 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.384931 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.487973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.488037 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.488129 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.488159 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.488176 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.590410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.590453 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.590501 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.590521 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.590536 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.693320 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.693370 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.693381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.693400 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.693414 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.796165 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.796400 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.796526 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.796604 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.796677 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.900257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.900338 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.900376 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.900395 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:42 crc kubenswrapper[4789]: I1124 11:31:42.900412 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:42Z","lastTransitionTime":"2025-11-24T11:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.004133 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.004219 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.004238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.004260 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.004277 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.107404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.107566 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.107614 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.107639 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.107679 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.209927 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.209971 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.209986 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.210005 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.210020 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.313193 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.313260 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.313281 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.313310 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.313333 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.416174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.416232 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.416254 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.416280 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.416302 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.519253 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.519334 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.519359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.519387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.519408 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.622681 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.622753 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.622776 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.622803 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.622826 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.726451 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.726552 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.726570 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.726598 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.726615 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.829141 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.829235 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.829254 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.829277 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.829355 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.932227 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.932292 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.932314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.932341 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:43 crc kubenswrapper[4789]: I1124 11:31:43.932359 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:43Z","lastTransitionTime":"2025-11-24T11:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.035002 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.035411 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.035650 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.035819 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.035940 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.139190 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.139252 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.139265 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.139282 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.139295 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.168761 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.168765 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.169174 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:44 crc kubenswrapper[4789]: E1124 11:31:44.169307 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.169343 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:31:44 crc kubenswrapper[4789]: E1124 11:31:44.169452 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:31:44 crc kubenswrapper[4789]: E1124 11:31:44.169582 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:31:44 crc kubenswrapper[4789]: E1124 11:31:44.169675 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.241467 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.241505 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.241514 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.241531 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.241541 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.344792 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.344860 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.344879 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.344943 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.344965 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.448165 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.448393 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.448536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.448633 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.448732 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.552008 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.552376 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.552626 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.552990 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.553214 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.656331 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.656385 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.656402 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.656425 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.656442 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.758640 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.759266 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.759358 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.759488 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.759578 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.862370 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.862439 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.862504 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.862537 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.862562 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.965917 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.965971 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.965991 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.966016 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:44 crc kubenswrapper[4789]: I1124 11:31:44.966036 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:44Z","lastTransitionTime":"2025-11-24T11:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.068665 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.068763 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.068782 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.068807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.068825 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.171478 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.171528 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.171544 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.171563 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.171578 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.274841 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.275107 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.275195 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.275285 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.275501 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.378292 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.378343 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.378359 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.378383 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.378399 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.481070 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.481149 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.481171 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.481195 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.481211 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.583856 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.583916 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.583934 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.583958 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.583976 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:45 crc kubenswrapper[4789]: E1124 11:31:45.661062 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:45Z is after 2025-08-24T17:21:41Z"
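The status patch the kubelet is trying to apply (a strategic merge patch carrying the four node conditions, allocatable/capacity figures, the cached image list, and nodeInfo) is never actually evaluated. The API server rejects it because it cannot call the node.network-node-identity.openshift.io admission webhook: the webhook's serving certificate at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-24T11:31:45Z. A quick probe to confirm the certificate's validity window from the host is sketched below; it is illustrative, not part of the kubelet: the endpoint is taken from the error, the openssl CLI is assumed to be installed, and certificate verification is deliberately disabled so the expired certificate can still be fetched.

# webhook_cert_check.py -- hypothetical probe: fetch the TLS certificate
# served on the webhook endpoint named in the error and print its validity
# window via the openssl CLI (assumed present on the host).
import socket
import ssl
import subprocess

HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook POST above

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # accept the expired cert so it can be inspected

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        pem = ssl.DER_cert_to_PEM_cert(tls.getpeercert(binary_form=True))

# The notAfter printed here should match the log's "is after 2025-08-24T17:21:41Z".
print(subprocess.run(["openssl", "x509", "-noout", "-dates"],
                     input=pem, capture_output=True, text=True, check=True).stdout)

Until that certificate is rotated, every node-status patch will fail the same way, which is why the kubelet logs "will retry" and the identical error recurs below.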
event="NodeHasNoDiskPressure" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.664423 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.664437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.664449 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:45 crc kubenswrapper[4789]: E1124 11:31:45.683337 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.688498 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.688534 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.688543 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.688556 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.688565 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:45 crc kubenswrapper[4789]: E1124 11:31:45.711545 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.716703 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.716795 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.716814 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.716870 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.716887 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:45 crc kubenswrapper[4789]: E1124 11:31:45.736657 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.741565 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.741664 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.741686 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.741713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.741767 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:45 crc kubenswrapper[4789]: E1124 11:31:45.761609 4789 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4376b485-9285-482b-9f4e-acdea532ff82\\\",\\\"systemUUID\\\":\\\"48941845-60e3-4de0-ba49-51eec51285bb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:45Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:45 crc kubenswrapper[4789]: E1124 11:31:45.761933 4789 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.764041 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
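
The status-update failure above traces to one TLS error: the node-identity webhook at 127.0.0.1:9743 presents a certificate that expired 2025-08-24, while the node clock reads 2025-11-24, so every patch attempt dies in the handshake until the retry budget is exhausted ("update node status exceeds retry count"). A quick way to confirm the validity window from the node is a short probe like the sketch below (a minimal sketch, not part of the log; the endpoint is taken from the error text, and InsecureSkipVerify is used only so the handshake survives long enough to read the peer certificate):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint copied from the webhook error in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Inspect the leaf certificate the kubelet's webhook client rejected.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore)
	fmt.Printf("notAfter:  %s\n", cert.NotAfter)
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```
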
event="NodeHasSufficientMemory" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.764153 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.764185 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.764208 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.764259 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.867898 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.868204 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.868323 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.868493 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.868633 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.973343 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.973416 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.973437 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.973509 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:45 crc kubenswrapper[4789]: I1124 11:31:45.973534 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:45Z","lastTransitionTime":"2025-11-24T11:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.076367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.076422 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.076440 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.076481 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.076496 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.169182 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.169228 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.169276 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:46 crc kubenswrapper[4789]: E1124 11:31:46.169322 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.169244 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:46 crc kubenswrapper[4789]: E1124 11:31:46.169429 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:46 crc kubenswrapper[4789]: E1124 11:31:46.169643 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
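
The pod-sync errors above all repeat the same condition: the kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/, so NetworkReady stays false and no new sandboxes can be wired up. The sketch below approximates that readiness check from the node (assumption flagged in the comment: on a healthy OVN-Kubernetes node, ovnkube-controller is the component expected to drop its config file into this directory):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet error above; ovnkube-controller is
	// expected to write its CNI config here once it runs (assumption).
	const cniConfDir = "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		log.Fatalf("reading %s: %v", cniConfDir, err)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; NetworkReady will stay false")
	}
}
```
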
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:46 crc kubenswrapper[4789]: E1124 11:31:46.170316 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.179279 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.179314 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.179326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.179341 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.179388 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.282645 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.282701 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.282717 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.282741 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.282762 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.386563 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.386640 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.386665 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.386698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.386721 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.489717 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.489759 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.489770 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.489786 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.489798 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.592604 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.592924 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.593057 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.593187 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.593322 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.697282 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.697354 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.697381 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.697410 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.697432 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.801128 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.802039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.802240 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.802396 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.802570 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.905329 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.905387 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.905398 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.905412 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:46 crc kubenswrapper[4789]: I1124 11:31:46.905422 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:46Z","lastTransitionTime":"2025-11-24T11:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.008601 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.008668 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.008687 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.008711 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.008728 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.111325 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.111370 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.111405 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.111431 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.111443 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.215089 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.215158 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.215174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.215192 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.215205 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.318119 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.318525 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.318736 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.318942 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.319149 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.421723 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.421789 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.421809 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.421834 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.421850 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.525009 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.525076 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.525093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.525116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.525133 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.627675 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.627739 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.627761 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.627797 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.627822 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.731526 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.731590 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.731608 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.731630 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.731648 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.834684 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.834766 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.834785 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.834809 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.834827 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.937772 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.937812 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.937823 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.937837 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:47 crc kubenswrapper[4789]: I1124 11:31:47.937848 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:47Z","lastTransitionTime":"2025-11-24T11:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.040990 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.041041 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.041058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.041080 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.041097 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.143839 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.143913 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.143943 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.143970 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.143993 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.168620 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:48 crc kubenswrapper[4789]: E1124 11:31:48.168711 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.168745 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:48 crc kubenswrapper[4789]: E1124 11:31:48.168865 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.168946 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:48 crc kubenswrapper[4789]: E1124 11:31:48.169000 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.169886 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:48 crc kubenswrapper[4789]: E1124 11:31:48.170320 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
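
The same four sandbox-less pods (network-check-source, network-check-target, network-metrics-daemon, networking-console-plugin) fail sync every couple of seconds, which makes the raw journal hard to scan. A throwaway counter like the sketch below (a hypothetical triage helper, not from the log; reads a journal dump on stdin) collapses the repetition into per-pod totals:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the kubelet's structured klog line seen above:
	//   "Error syncing pod, skipping" err="..." pod="<ns>/<name>"
	re := regexp.MustCompile(`Error syncing pod.*?pod="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	// Journal lines in this dump are very long; raise the scanner limit.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%6d %s\n", n, pod)
	}
}
```
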
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.188875 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.205377 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"719b0731-cabf-4883-bd19-bbe3786b4ac3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0303ba3fd943ad92e8cffb4d8322537a9115a81f2d714c22eed182bc8a90a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d011633bdece1cc331c96ab10bafee76ec769fdad2e60b09b2224ad3cf655395\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8847f098f36612e1b18e6fa7e9d3ecd32ae6a0aef704d6ed7e06f9115d993bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136df7849a013cb5393a500a40fcbe252deae349ad3c0d1dbc4f7926c01ff528\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.216968 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.246085 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.246131 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.246148 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.246169 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.246185 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.247847 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6d361cd-fbb3-466d-9026-4c685922072f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:36Z\\\",\\\"message\\\":\\\"nshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1124 11:31:36.069562 6717 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-5fgg5 in node crc\\\\nI1124 11:31:36.069564 6717 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1124 11:31:36.069570 6717 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-5fgg5 after 0 failed attempt(s)\\\\nI1124 11:31:36.069572 6717 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1124 11:31:36.069576 6717 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-5fgg5\\\\nF1124 11:31:36.069447 6717 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:31:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f7tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n4hd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.263780 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vztqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da80bfe1-36b3-4239-bf6e-a855a490290a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17faecc8b835016ac0c8868de42de9b0990ce6399926e949f319fc4a26a3257b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nz8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vztqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.282693 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zthhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5c4f42-e991-449b-aa93-2dea9d61dbc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74a73ebd6641a79c50641db01a42eaf7842b9700926f302b4f5e938efa5d865f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpwcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zthhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.296861 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-s69rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1033d5e6-680c-4193-aade-8c3d801b0e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h5sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-s69rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.318647 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d47af2f513180b03f52afdbda0d47ec20947956786b594583a3b3082764a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.342527 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eb8871-21cb-4fb0-92a4-02d4224ff2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fcd7ef8bfab3cbd56ad3f1df7b1d8aaf1459411f27649c7cd12dcde866d14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b82c21bbbdb78ad9d42039eb758eaf7435fc084c304538509262266c231b9ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902248bc14508bb37ad3fb249f74df4f9decb8aa63719ed834122e69b54e91c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da86de4c3c1950341ad56d25985dbb6b986aee2260445651768aeff6cef730ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5508a1750ce86c9edba495a49b90290f71d952c2026f4106f17b919460ff858\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50fc0cfac86ea72e9e49e86f579fea44b7637f47952fa22697b1d733bb9cb12f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce73b3dc8fd30aa55926c4cf1f3a5e7f0b68a238a2dc6b97031ccf2d3a16f03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bbbf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.348546 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.348583 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc 
kubenswrapper[4789]: I1124 11:31:48.348592 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.348608 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.348618 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.356333 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c88057c-782b-4cc3-8243-828d959f4434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8b2f85ae9f76d8adf40a2018100916e9aace7877f1f10f26a147088cf44898d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b792d376da032b1887743c253b0109f14b255a30ef15032b261605d07de2f0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:3
0:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmkqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jz2zx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.373946 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5fgg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"776a7cdb-6468-4e8a-8577-3535ff549781\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:31:31Z\\\",\\\"message\\\":\\\"2025-11-24T11:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e\\\\n2025-11-24T11:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d056051d-2323-4561-b0c2-c4c6ba6f431e to /host/opt/cni/bin/\\\\n2025-11-24T11:30:46Z [verbose] multus-daemon started\\\\n2025-11-24T11:30:46Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ct4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5fgg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.388354 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c4a832-f0e4-481b-a474-3ecea86049f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb40689bf9e2d48e8dbd0827e82dc097464ab71edf0f871edc26ff8ed3508957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q72sq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9czvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.405525 4789 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad91557a-c8cf-4dcd-b434-48f7cdbf9955\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edb7c8772394f7e4e2a72f2f354cf4b45d4e4ec2c5897c415583c26012e4508e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d4c69ca57fd2625092ab049c4cf09c515edaedf5219818d8b86d1405fbf9f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.421231 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aeb14bf-aa9c-4edf-bef0-2e921ba629dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:31:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb4fdc83e45c885da432e3ddf529585235251054d4e07375cb687db8036452c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4fe650065a79f9a2771fb9553393965448e8fe5ca7f1afb32da888aa4753fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4274f4121ee23152751aa70e02bd3b1a535d0cbc8ee1982e48877ea125e6e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc01f98a19f3885135cee8c8ee980f101ca61c40d316c0296bacfc3218400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d1aa39fced7797bd6e3d5d4a19962fcd0de70a0ea2bc385fd8e97410836004\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 11:30:37.767675 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 11:30:37.767888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:30:37.768654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1972186645/tls.crt::/tmp/serving-cert-1972186645/tls.key\\\\\\\"\\\\nI1124 11:30:38.130111 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:30:38.141185 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:30:38.141217 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:30:38.141239 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:30:38.141246 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:30:38.147443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:30:38.147499 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:30:38.147510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:30:38.147513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:30:38.147515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:30:38.147519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:30:38.147618 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:30:38.154052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://904bf93b4658be52e6c1dfb01ce41c45b345842521bb46671c6dcd20d7ecfd57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecbf64e3da26d32778fd5f04784f4490abcdabf56cf5d08129f024d24408a054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.435777 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5292f7bb-af17-47e9-94ae-f055f9e27927\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651c6fd4e1c1a453ca8125682145ba0eb222e12254b54447825919945af2ad11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c6066004c9ad3296d51eae14270f2c19c1cb432b0b84c26e43fe011dd56d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9f2eb41d9aa167a42524b8c7570942988cb4298f50931b07ecd38b32f6a983\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5ebc7c3c77d5618ef9bf4bcf4f25c0fe00f68485e9a1e080c11599590a8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:30:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.451842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.452093 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.452184 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.452273 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.451829 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.452358 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.464658 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422480a045454133a17132666976f8e5a564759ab1bf7668e41ad1663eb4bc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dce8b517d8f914c50b708fd7d66e6e3796768ded1a0bcb0c5f575f124844c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.474991 4789 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:30:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b941dfb57d7894426efab65a2f2f6a0cbb524c48c0657d493eefe51923f30711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:31:48Z is after 2025-08-24T17:21:41Z" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.555804 4789 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.555874 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.555946 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.555982 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.556008 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.663501 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.663581 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.663605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.663635 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.663656 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.766880 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.766917 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.766928 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.766945 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.766956 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.870006 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.870075 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.870094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.870117 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.870135 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.973891 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.973956 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.973973 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.973997 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:48 crc kubenswrapper[4789]: I1124 11:31:48.974019 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:48Z","lastTransitionTime":"2025-11-24T11:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.077539 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.077590 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.077605 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.077624 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.077635 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.180305 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.180342 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.180352 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.180368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.180381 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.282540 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.282806 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.282897 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.282981 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.283064 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.386262 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.386293 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.386302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.386315 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.386325 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.488769 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.488970 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.489002 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.489094 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.489190 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.591116 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.591155 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.591166 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.591182 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.591193 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.693600 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.693667 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.693689 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.693718 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.693739 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.796810 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.796877 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.796899 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.796929 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.796955 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.900321 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.900388 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.900404 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.900426 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:49 crc kubenswrapper[4789]: I1124 11:31:49.900442 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:49Z","lastTransitionTime":"2025-11-24T11:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.003286 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.003335 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.003347 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.003362 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.003376 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.106270 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.106530 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.106575 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.106594 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.106605 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.168715 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.168753 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.168842 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:31:50 crc kubenswrapper[4789]: E1124 11:31:50.168980 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.169063 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:31:50 crc kubenswrapper[4789]: E1124 11:31:50.169213 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:31:50 crc kubenswrapper[4789]: E1124 11:31:50.169247 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:50 crc kubenswrapper[4789]: E1124 11:31:50.169410 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.209026 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.209300 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.209388 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.209495 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.209585 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.312357 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.312653 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.312779 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.312886 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.312968 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.415368 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.415769 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.415969 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.416196 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.416350 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.519681 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.519713 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.519721 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.519733 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.519742 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.622058 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.622096 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.622108 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.622126 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.622137 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.724698 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.724775 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.724802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.724829 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.724848 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.828072 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.828146 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.828169 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.828202 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.828223 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.931183 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.931251 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.931274 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.931302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:50 crc kubenswrapper[4789]: I1124 11:31:50.931323 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:50Z","lastTransitionTime":"2025-11-24T11:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.033922 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.033965 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.033974 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.033987 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.033996 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.136199 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.136250 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.136261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.136280 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.136292 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.239014 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.239302 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.239376 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.239443 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.239525 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.343257 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.343326 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.343344 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.343369 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.343387 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.446017 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.446285 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.446367 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.446444 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.446539 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.549520 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.549577 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.549591 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.549610 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.549622 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.652566 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.652599 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.652608 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.652620 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.652628 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.754261 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.754294 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.754304 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.754318 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.754328 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.857246 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.857293 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.857310 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.857333 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.857349 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.959874 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.960536 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.960561 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.960578 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:51 crc kubenswrapper[4789]: I1124 11:31:51.960590 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:51Z","lastTransitionTime":"2025-11-24T11:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.063718 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.063895 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.063933 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.063964 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.063987 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:52Z","lastTransitionTime":"2025-11-24T11:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.166730 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.166785 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.166802 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.166826 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.166843 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:52Z","lastTransitionTime":"2025-11-24T11:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.168691 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.168728 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.168786 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:31:52 crc kubenswrapper[4789]: E1124 11:31:52.168981 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.169048 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:52 crc kubenswrapper[4789]: E1124 11:31:52.169187 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:31:52 crc kubenswrapper[4789]: E1124 11:31:52.169270 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:31:52 crc kubenswrapper[4789]: E1124 11:31:52.169864 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.170228 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"
Nov 24 11:31:52 crc kubenswrapper[4789]: E1124 11:31:52.170489 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.270138 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.270198 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.270217 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.270238 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.270255 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:52Z","lastTransitionTime":"2025-11-24T11:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.374112 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.374174 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.374191 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.374212 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:52 crc kubenswrapper[4789]: I1124 11:31:52.374230 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:52Z","lastTransitionTime":"2025-11-24T11:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... 15 further identical node-status blocks (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" with the same KubeletNotReady, no-CNI-configuration message) logged at ~100 ms intervals from 11:31:52.476849 through 11:31:53.924879 ...]
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.027522 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.027547 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.027565 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.027581 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.027592 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:54Z","lastTransitionTime":"2025-11-24T11:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.129501 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.129566 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.129586 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.129611 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.129630 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:54Z","lastTransitionTime":"2025-11-24T11:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.168729 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.168785 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.168919 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:31:54 crc kubenswrapper[4789]: E1124 11:31:54.168915 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f"
Nov 24 11:31:54 crc kubenswrapper[4789]: E1124 11:31:54.169088 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:31:54 crc kubenswrapper[4789]: E1124 11:31:54.169351 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.170046 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:54 crc kubenswrapper[4789]: E1124 11:31:54.170188 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.232758 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.232807 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.232820 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.232842 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:54 crc kubenswrapper[4789]: I1124 11:31:54.232859 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:54Z","lastTransitionTime":"2025-11-24T11:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... 15 further identical node-status blocks (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" with the same KubeletNotReady, no-CNI-configuration message) logged at ~100 ms intervals from 11:31:54.335316 through 11:31:55.778668 ...]
Nov 24 11:31:55 crc kubenswrapper[4789]: I1124 11:31:55.856908 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:31:55 crc kubenswrapper[4789]: I1124 11:31:55.856990 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:31:55 crc kubenswrapper[4789]: I1124 11:31:55.857014 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:31:55 crc kubenswrapper[4789]: I1124 11:31:55.857039 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:31:55 crc kubenswrapper[4789]: I1124 11:31:55.857056 4789 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:31:55Z","lastTransitionTime":"2025-11-24T11:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.169780 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:56 crc kubenswrapper[4789]: E1124 11:31:56.169908 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.169974 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:31:56 crc kubenswrapper[4789]: E1124 11:31:56.170030 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.170291 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.170322 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:56 crc kubenswrapper[4789]: E1124 11:31:56.170402 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:31:56 crc kubenswrapper[4789]: E1124 11:31:56.170491 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.183810 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"]
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.184217 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.186770 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.186857 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.186936 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.189515 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.220403 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.220377773 podStartE2EDuration="1m18.220377773s" podCreationTimestamp="2025-11-24 11:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.20619126 +0000 UTC m=+98.788662639" watchObservedRunningTime="2025-11-24 11:31:56.220377773 +0000 UTC m=+98.802849172"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.236036 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.236019515 podStartE2EDuration="1m19.236019515s" podCreationTimestamp="2025-11-24 11:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.222046859 +0000 UTC m=+98.804518228" watchObservedRunningTime="2025-11-24 11:31:56.236019515 +0000 UTC m=+98.818490894"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.279541 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5fgg5" podStartSLOduration=73.279525507 podStartE2EDuration="1m13.279525507s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.279030204 +0000 UTC m=+98.861501603" watchObservedRunningTime="2025-11-24 11:31:56.279525507 +0000 UTC m=+98.861996896"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.328255 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podStartSLOduration=73.328234973 podStartE2EDuration="1m13.328234973s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.302591234 +0000 UTC m=+98.885062653" watchObservedRunningTime="2025-11-24 11:31:56.328234973 +0000 UTC m=+98.910706352"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.341963 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.341945241 podStartE2EDuration="27.341945241s" podCreationTimestamp="2025-11-24 11:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.327987255 +0000 UTC m=+98.910458634" watchObservedRunningTime="2025-11-24 11:31:56.341945241 +0000 UTC m=+98.924416620"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.359449 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7db326d1-701b-404c-8339-edec52fa45bd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.359758 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7db326d1-701b-404c-8339-edec52fa45bd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.359890 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7db326d1-701b-404c-8339-edec52fa45bd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.359999 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7db326d1-701b-404c-8339-edec52fa45bd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.360137 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db326d1-701b-404c-8339-edec52fa45bd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.401007 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vztqv" podStartSLOduration=73.400989963 podStartE2EDuration="1m13.400989963s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.400763096 +0000 UTC m=+98.983234485" watchObservedRunningTime="2025-11-24 11:31:56.400989963 +0000 UTC m=+98.983461352"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.412441 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zthhc" podStartSLOduration=73.412425609 podStartE2EDuration="1m13.412425609s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.412172472 +0000 UTC m=+98.994643851" watchObservedRunningTime="2025-11-24 11:31:56.412425609 +0000 UTC m=+98.994896998"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.447245 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.44722782 podStartE2EDuration="43.44722782s" podCreationTimestamp="2025-11-24 11:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.433572133 +0000 UTC m=+99.016043512" watchObservedRunningTime="2025-11-24 11:31:56.44722782 +0000 UTC m=+99.029699209"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.465033 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7db326d1-701b-404c-8339-edec52fa45bd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.465080 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7db326d1-701b-404c-8339-edec52fa45bd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.465106 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7db326d1-701b-404c-8339-edec52fa45bd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.465125 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7db326d1-701b-404c-8339-edec52fa45bd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.465163 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db326d1-701b-404c-8339-edec52fa45bd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.465495 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7db326d1-701b-404c-8339-edec52fa45bd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.465515 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7db326d1-701b-404c-8339-edec52fa45bd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.466765 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7db326d1-701b-404c-8339-edec52fa45bd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.470585 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db326d1-701b-404c-8339-edec52fa45bd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.479146 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bbbf7" podStartSLOduration=73.479125721 podStartE2EDuration="1m13.479125721s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.477785184 +0000 UTC m=+99.060256563" watchObservedRunningTime="2025-11-24 11:31:56.479125721 +0000 UTC m=+99.061597100"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.489439 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jz2zx" podStartSLOduration=72.489425396 podStartE2EDuration="1m12.489425396s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.488652684 +0000 UTC m=+99.071124063" watchObservedRunningTime="2025-11-24 11:31:56.489425396 +0000 UTC m=+99.071896775"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.490262 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7db326d1-701b-404c-8339-edec52fa45bd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5d7c6\" (UID: \"7db326d1-701b-404c-8339-edec52fa45bd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.500057 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6"
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.720824 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6" event={"ID":"7db326d1-701b-404c-8339-edec52fa45bd","Type":"ContainerStarted","Data":"121897e0d3f2c24bd6338bf7c91a4504d9ef2f6226f2c4fb74fe5e8ba6561f26"}
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.721410 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6" event={"ID":"7db326d1-701b-404c-8339-edec52fa45bd","Type":"ContainerStarted","Data":"2f8f77042b7f403983241a59de125a07084ebded02d4ef7c2b32ae57ddc1a6e6"}
Nov 24 11:31:56 crc kubenswrapper[4789]: I1124 11:31:56.736290 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d7c6" podStartSLOduration=73.736268955 podStartE2EDuration="1m13.736268955s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:31:56.735388661 +0000 UTC m=+99.317860050" watchObservedRunningTime="2025-11-24 11:31:56.736268955 +0000 UTC m=+99.318740334"
Nov 24 11:31:58 crc kubenswrapper[4789]: I1124 11:31:58.168986 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:31:58 crc kubenswrapper[4789]: I1124 11:31:58.169179 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz"
Nov 24 11:31:58 crc kubenswrapper[4789]: E1124 11:31:58.178869 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:31:58 crc kubenswrapper[4789]: I1124 11:31:58.178919 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:31:58 crc kubenswrapper[4789]: I1124 11:31:58.178989 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:31:58 crc kubenswrapper[4789]: E1124 11:31:58.179186 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:31:58 crc kubenswrapper[4789]: E1124 11:31:58.179313 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:31:58 crc kubenswrapper[4789]: E1124 11:31:58.180582 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:31:59 crc kubenswrapper[4789]: I1124 11:31:59.185761 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 11:32:00 crc kubenswrapper[4789]: I1124 11:32:00.168712 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:00 crc kubenswrapper[4789]: I1124 11:32:00.168768 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:00 crc kubenswrapper[4789]: I1124 11:32:00.168884 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:00 crc kubenswrapper[4789]: E1124 11:32:00.168916 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:00 crc kubenswrapper[4789]: E1124 11:32:00.169083 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:00 crc kubenswrapper[4789]: E1124 11:32:00.169189 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:00 crc kubenswrapper[4789]: I1124 11:32:00.169288 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:00 crc kubenswrapper[4789]: E1124 11:32:00.169886 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:02 crc kubenswrapper[4789]: I1124 11:32:02.168744 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:02 crc kubenswrapper[4789]: I1124 11:32:02.168745 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:02 crc kubenswrapper[4789]: E1124 11:32:02.169004 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:02 crc kubenswrapper[4789]: I1124 11:32:02.168770 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:02 crc kubenswrapper[4789]: E1124 11:32:02.169178 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:02 crc kubenswrapper[4789]: E1124 11:32:02.169230 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:02 crc kubenswrapper[4789]: I1124 11:32:02.169617 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:02 crc kubenswrapper[4789]: E1124 11:32:02.169770 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:02 crc kubenswrapper[4789]: I1124 11:32:02.434894 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:02 crc kubenswrapper[4789]: E1124 11:32:02.435245 4789 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:32:02 crc kubenswrapper[4789]: E1124 11:32:02.435603 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs podName:1033d5e6-680c-4193-aade-8c3d801b0e3f nodeName:}" failed. No retries permitted until 2025-11-24 11:33:06.435562871 +0000 UTC m=+169.018034250 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs") pod "network-metrics-daemon-s69rz" (UID: "1033d5e6-680c-4193-aade-8c3d801b0e3f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:32:04 crc kubenswrapper[4789]: I1124 11:32:04.169028 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:04 crc kubenswrapper[4789]: I1124 11:32:04.169165 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:04 crc kubenswrapper[4789]: E1124 11:32:04.169300 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:04 crc kubenswrapper[4789]: I1124 11:32:04.169316 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:04 crc kubenswrapper[4789]: I1124 11:32:04.169355 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:04 crc kubenswrapper[4789]: E1124 11:32:04.169754 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:04 crc kubenswrapper[4789]: E1124 11:32:04.169882 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:04 crc kubenswrapper[4789]: E1124 11:32:04.169991 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:04 crc kubenswrapper[4789]: I1124 11:32:04.170204 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:32:04 crc kubenswrapper[4789]: E1124 11:32:04.170376 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n4hd6_openshift-ovn-kubernetes(c6d361cd-fbb3-466d-9026-4c685922072f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" Nov 24 11:32:06 crc kubenswrapper[4789]: I1124 11:32:06.168934 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:06 crc kubenswrapper[4789]: I1124 11:32:06.168971 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:06 crc kubenswrapper[4789]: I1124 11:32:06.169073 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:06 crc kubenswrapper[4789]: E1124 11:32:06.169074 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:06 crc kubenswrapper[4789]: I1124 11:32:06.169112 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:06 crc kubenswrapper[4789]: E1124 11:32:06.169159 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:06 crc kubenswrapper[4789]: E1124 11:32:06.169197 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:06 crc kubenswrapper[4789]: E1124 11:32:06.169230 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:08 crc kubenswrapper[4789]: I1124 11:32:08.168999 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:08 crc kubenswrapper[4789]: I1124 11:32:08.169042 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:08 crc kubenswrapper[4789]: I1124 11:32:08.169165 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:08 crc kubenswrapper[4789]: I1124 11:32:08.170315 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:08 crc kubenswrapper[4789]: E1124 11:32:08.170309 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:08 crc kubenswrapper[4789]: E1124 11:32:08.173626 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:08 crc kubenswrapper[4789]: E1124 11:32:08.173792 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:08 crc kubenswrapper[4789]: E1124 11:32:08.174032 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:10 crc kubenswrapper[4789]: I1124 11:32:10.168980 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:10 crc kubenswrapper[4789]: I1124 11:32:10.168981 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:10 crc kubenswrapper[4789]: I1124 11:32:10.168996 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:10 crc kubenswrapper[4789]: I1124 11:32:10.170352 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:10 crc kubenswrapper[4789]: E1124 11:32:10.170590 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:10 crc kubenswrapper[4789]: E1124 11:32:10.170876 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:10 crc kubenswrapper[4789]: E1124 11:32:10.171013 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:10 crc kubenswrapper[4789]: E1124 11:32:10.171128 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:12 crc kubenswrapper[4789]: I1124 11:32:12.169038 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:12 crc kubenswrapper[4789]: I1124 11:32:12.169088 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:12 crc kubenswrapper[4789]: I1124 11:32:12.169058 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:12 crc kubenswrapper[4789]: I1124 11:32:12.169058 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:12 crc kubenswrapper[4789]: E1124 11:32:12.169212 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:12 crc kubenswrapper[4789]: E1124 11:32:12.169292 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:12 crc kubenswrapper[4789]: E1124 11:32:12.169365 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:12 crc kubenswrapper[4789]: E1124 11:32:12.169437 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:14 crc kubenswrapper[4789]: I1124 11:32:14.168502 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:14 crc kubenswrapper[4789]: E1124 11:32:14.168713 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:14 crc kubenswrapper[4789]: I1124 11:32:14.168801 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:14 crc kubenswrapper[4789]: E1124 11:32:14.168882 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:14 crc kubenswrapper[4789]: I1124 11:32:14.169270 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:14 crc kubenswrapper[4789]: E1124 11:32:14.169356 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:14 crc kubenswrapper[4789]: I1124 11:32:14.169585 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:14 crc kubenswrapper[4789]: E1124 11:32:14.169671 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:16 crc kubenswrapper[4789]: I1124 11:32:16.168393 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:16 crc kubenswrapper[4789]: I1124 11:32:16.168445 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:16 crc kubenswrapper[4789]: I1124 11:32:16.168476 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:16 crc kubenswrapper[4789]: I1124 11:32:16.168488 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:16 crc kubenswrapper[4789]: E1124 11:32:16.169059 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:16 crc kubenswrapper[4789]: E1124 11:32:16.169124 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:16 crc kubenswrapper[4789]: E1124 11:32:16.169180 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:16 crc kubenswrapper[4789]: E1124 11:32:16.169329 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:18 crc kubenswrapper[4789]: E1124 11:32:18.119788 4789 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.169215 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.169247 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.169326 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:18 crc kubenswrapper[4789]: E1124 11:32:18.170952 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.171012 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:18 crc kubenswrapper[4789]: E1124 11:32:18.171251 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:18 crc kubenswrapper[4789]: E1124 11:32:18.171787 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:18 crc kubenswrapper[4789]: E1124 11:32:18.171924 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:18 crc kubenswrapper[4789]: E1124 11:32:18.260029 4789 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.793939 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/1.log" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.794681 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/0.log" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.795125 4789 generic.go:334] "Generic (PLEG): container finished" podID="776a7cdb-6468-4e8a-8577-3535ff549781" containerID="d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1" exitCode=1 Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.795261 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5fgg5" event={"ID":"776a7cdb-6468-4e8a-8577-3535ff549781","Type":"ContainerDied","Data":"d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1"} Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.795588 4789 scope.go:117] "RemoveContainer" containerID="7a9c256912e5f9308382925d83cd341ff711fdd9fce20f0c76d22f59033bfbf8" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.797750 4789 scope.go:117] "RemoveContainer" containerID="d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1" Nov 24 11:32:18 crc kubenswrapper[4789]: E1124 11:32:18.798084 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5fgg5_openshift-multus(776a7cdb-6468-4e8a-8577-3535ff549781)\"" pod="openshift-multus/multus-5fgg5" podUID="776a7cdb-6468-4e8a-8577-3535ff549781" Nov 24 11:32:18 crc kubenswrapper[4789]: I1124 11:32:18.824481 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.824440698 podStartE2EDuration="19.824440698s" podCreationTimestamp="2025-11-24 11:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:08.207911204 +0000 UTC m=+110.790382583" watchObservedRunningTime="2025-11-24 11:32:18.824440698 +0000 UTC m=+121.406912077" Nov 24 11:32:19 crc kubenswrapper[4789]: I1124 11:32:19.170113 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:32:19 crc kubenswrapper[4789]: I1124 11:32:19.800420 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/3.log" Nov 24 11:32:19 crc kubenswrapper[4789]: I1124 11:32:19.803025 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerStarted","Data":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} Nov 24 11:32:19 crc kubenswrapper[4789]: I1124 11:32:19.804021 4789 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:32:19 crc kubenswrapper[4789]: I1124 11:32:19.804398 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/1.log" Nov 24 11:32:19 crc kubenswrapper[4789]: I1124 11:32:19.841767 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podStartSLOduration=96.841750323 podStartE2EDuration="1m36.841750323s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:19.84090299 +0000 UTC m=+122.423374419" watchObservedRunningTime="2025-11-24 11:32:19.841750323 +0000 UTC m=+122.424221702" Nov 24 11:32:20 crc kubenswrapper[4789]: I1124 11:32:20.026335 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-s69rz"] Nov 24 11:32:20 crc kubenswrapper[4789]: I1124 11:32:20.026476 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:20 crc kubenswrapper[4789]: E1124 11:32:20.026576 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:20 crc kubenswrapper[4789]: I1124 11:32:20.172706 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:20 crc kubenswrapper[4789]: I1124 11:32:20.172741 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:20 crc kubenswrapper[4789]: I1124 11:32:20.173158 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:20 crc kubenswrapper[4789]: E1124 11:32:20.173725 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:20 crc kubenswrapper[4789]: E1124 11:32:20.173949 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:20 crc kubenswrapper[4789]: E1124 11:32:20.174066 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:22 crc kubenswrapper[4789]: I1124 11:32:22.168553 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:22 crc kubenswrapper[4789]: I1124 11:32:22.168663 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:22 crc kubenswrapper[4789]: I1124 11:32:22.168553 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:22 crc kubenswrapper[4789]: I1124 11:32:22.168763 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:22 crc kubenswrapper[4789]: E1124 11:32:22.168769 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:22 crc kubenswrapper[4789]: E1124 11:32:22.168954 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:22 crc kubenswrapper[4789]: E1124 11:32:22.169037 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:22 crc kubenswrapper[4789]: E1124 11:32:22.169170 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:23 crc kubenswrapper[4789]: E1124 11:32:23.261246 4789 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:32:24 crc kubenswrapper[4789]: I1124 11:32:24.168886 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:24 crc kubenswrapper[4789]: I1124 11:32:24.168944 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:24 crc kubenswrapper[4789]: I1124 11:32:24.168912 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:24 crc kubenswrapper[4789]: I1124 11:32:24.168911 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:24 crc kubenswrapper[4789]: E1124 11:32:24.169062 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:24 crc kubenswrapper[4789]: E1124 11:32:24.169439 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:24 crc kubenswrapper[4789]: E1124 11:32:24.169556 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:24 crc kubenswrapper[4789]: E1124 11:32:24.169626 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:26 crc kubenswrapper[4789]: I1124 11:32:26.168435 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:26 crc kubenswrapper[4789]: I1124 11:32:26.168570 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:26 crc kubenswrapper[4789]: I1124 11:32:26.168582 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:26 crc kubenswrapper[4789]: I1124 11:32:26.168557 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:26 crc kubenswrapper[4789]: E1124 11:32:26.168736 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:26 crc kubenswrapper[4789]: E1124 11:32:26.168880 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:26 crc kubenswrapper[4789]: E1124 11:32:26.168973 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:26 crc kubenswrapper[4789]: E1124 11:32:26.169043 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:28 crc kubenswrapper[4789]: I1124 11:32:28.168399 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:28 crc kubenswrapper[4789]: I1124 11:32:28.168432 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:28 crc kubenswrapper[4789]: I1124 11:32:28.168541 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:28 crc kubenswrapper[4789]: I1124 11:32:28.168569 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:28 crc kubenswrapper[4789]: E1124 11:32:28.170087 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:28 crc kubenswrapper[4789]: E1124 11:32:28.170202 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:28 crc kubenswrapper[4789]: E1124 11:32:28.170350 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:28 crc kubenswrapper[4789]: E1124 11:32:28.170433 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:28 crc kubenswrapper[4789]: E1124 11:32:28.262995 4789 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:32:30 crc kubenswrapper[4789]: I1124 11:32:30.168861 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:30 crc kubenswrapper[4789]: I1124 11:32:30.168949 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:30 crc kubenswrapper[4789]: E1124 11:32:30.169408 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:30 crc kubenswrapper[4789]: I1124 11:32:30.169188 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:30 crc kubenswrapper[4789]: E1124 11:32:30.169560 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:30 crc kubenswrapper[4789]: I1124 11:32:30.168965 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:30 crc kubenswrapper[4789]: E1124 11:32:30.169629 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:30 crc kubenswrapper[4789]: E1124 11:32:30.169674 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:30 crc kubenswrapper[4789]: I1124 11:32:30.170607 4789 scope.go:117] "RemoveContainer" containerID="d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1" Nov 24 11:32:30 crc kubenswrapper[4789]: I1124 11:32:30.852759 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/1.log" Nov 24 11:32:30 crc kubenswrapper[4789]: I1124 11:32:30.852823 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5fgg5" event={"ID":"776a7cdb-6468-4e8a-8577-3535ff549781","Type":"ContainerStarted","Data":"203e3c34a84e87a42786ebf6949054419d8b261ddf1df1c709a9e12b3299b362"} Nov 24 11:32:32 crc kubenswrapper[4789]: I1124 11:32:32.169151 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:32 crc kubenswrapper[4789]: I1124 11:32:32.169179 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:32 crc kubenswrapper[4789]: E1124 11:32:32.169377 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-s69rz" podUID="1033d5e6-680c-4193-aade-8c3d801b0e3f" Nov 24 11:32:32 crc kubenswrapper[4789]: I1124 11:32:32.169584 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:32 crc kubenswrapper[4789]: E1124 11:32:32.169821 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:32:32 crc kubenswrapper[4789]: I1124 11:32:32.170073 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:32 crc kubenswrapper[4789]: E1124 11:32:32.170185 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:32:32 crc kubenswrapper[4789]: E1124 11:32:32.170350 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.168441 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.168582 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.168678 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.170017 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.175223 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.176206 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.176386 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.176686 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.181062 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 11:32:34 crc kubenswrapper[4789]: I1124 11:32:34.183410 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.020180 4789 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.082648 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.083512 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.086373 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.087406 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.092283 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j4swj"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.093283 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.094060 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.095096 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.095806 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gtxzr"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.096978 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.098860 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.099100 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.099390 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.099673 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.100020 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.100311 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.100532 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.100649 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kssj7"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.101610 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.105566 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.105610 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.113350 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.113956 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bp2hb"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.114497 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.114510 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.115061 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.115772 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-spvgg"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.116351 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.116633 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.124085 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.129703 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.129749 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.141235 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.141757 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.142149 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.146919 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.161630 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.161765 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.161820 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.163284 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.163425 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.163534 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.164569 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ljwn7"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.164953 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.165444 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.165626 4789 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.165763 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.165795 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166004 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166190 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166034 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.167955 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166798 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.168505 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166869 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.168762 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166061 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.169439 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166369 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166406 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166448 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166511 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166530 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.170161 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166546 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166559 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166590 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.170356 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166655 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166690 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166821 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166877 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166922 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166940 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.166999 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.167057 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.167095 4789 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.167312 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.167788 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.168496 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.171125 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.168885 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.169009 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.169093 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.169330 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.169361 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.169593 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.177655 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.178739 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mlcwl"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.179275 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mlcwl" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.181001 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.185679 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t2scc"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.189063 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klw64"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.189437 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.189551 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.189633 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.189856 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.189886 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.189938 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.190102 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.190125 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.190228 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.190341 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.190380 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.190475 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.190597 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.190614 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.191578 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.191726 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.195858 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.196863 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.200570 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7zss"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.201000 4789 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.201132 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.201067 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q52tc"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.218167 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j4swj"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.218308 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.221368 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.221387 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.222748 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.223075 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.223158 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.239322 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.239934 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.240433 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-config\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.260537 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.260726 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5489e784-b2d8-47f6-87b7-4c0b0786caaf-config\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.260821 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/584e1901-c470-4a3f-9461-7e97f4688399-serving-cert\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.261235 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.262619 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.262747 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9380ccce-963f-42e6-b182-65e9bbf9f47e-serving-cert\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.262845 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxv4\" (UniqueName: \"kubernetes.io/projected/4372e46e-19ca-487e-b2ee-1fea92a3197d-kube-api-access-zgxv4\") pod \"controller-manager-879f6c89f-j4swj\" (UID: 
\"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.263715 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6klb\" (UniqueName: \"kubernetes.io/projected/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-kube-api-access-r6klb\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.266386 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.266530 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j6pl\" (UniqueName: \"kubernetes.io/projected/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-kube-api-access-4j6pl\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.266609 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-serving-cert\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.266681 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-serving-cert\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.266747 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.267518 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-client-ca\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.267611 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-encryption-config\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.267687 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-policies\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.267756 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.267834 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-etcd-client\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.267899 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wj45\" (UniqueName: \"kubernetes.io/projected/22cf157e-ce67-43f4-bbaf-577720728887-kube-api-access-7wj45\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.267965 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.268027 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.268097 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-service-ca\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.268181 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-config\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.268254 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.268324 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.268400 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4372e46e-19ca-487e-b2ee-1fea92a3197d-serving-cert\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.271586 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9da2bc3-3945-4a02-8613-39338321441d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.271686 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thlhn\" (UniqueName: \"kubernetes.io/projected/5489e784-b2d8-47f6-87b7-4c0b0786caaf-kube-api-access-thlhn\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.271761 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-image-import-ca\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.271839 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-trusted-ca-bundle\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.271904 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 
11:32:37.271978 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57bvz\" (UniqueName: \"kubernetes.io/projected/584e1901-c470-4a3f-9461-7e97f4688399-kube-api-access-57bvz\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272054 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272127 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-config\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272201 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6q4\" (UniqueName: \"kubernetes.io/projected/c9da2bc3-3945-4a02-8613-39338321441d-kube-api-access-vb6q4\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272275 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272347 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9da2bc3-3945-4a02-8613-39338321441d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272427 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-config\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272545 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-oauth-serving-cert\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " 
pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272641 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-etcd-serving-ca\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272739 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-config\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272849 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22cf157e-ce67-43f4-bbaf-577720728887-audit-dir\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272923 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-service-ca-bundle\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.272994 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb58z\" (UniqueName: \"kubernetes.io/projected/026c0fd3-78be-48ef-81cd-ba63abb9197d-kube-api-access-xb58z\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273067 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-client-ca\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273133 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273202 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5489e784-b2d8-47f6-87b7-4c0b0786caaf-auth-proxy-config\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc 
kubenswrapper[4789]: I1124 11:32:37.273279 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-serving-cert\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273361 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-audit\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273436 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273561 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5489e784-b2d8-47f6-87b7-4c0b0786caaf-machine-approver-tls\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273633 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-audit-policies\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273717 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2v5m\" (UniqueName: \"kubernetes.io/projected/9380ccce-963f-42e6-b182-65e9bbf9f47e-kube-api-access-k2v5m\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273785 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-dir\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.273867 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 
11:32:37.273973 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg7b8\" (UniqueName: \"kubernetes.io/projected/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-kube-api-access-xg7b8\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.274059 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-etcd-client\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.274133 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.274206 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-oauth-config\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.274269 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-encryption-config\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.274350 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22cf157e-ce67-43f4-bbaf-577720728887-node-pullsecrets\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.274426 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-serving-cert\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.274520 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-audit-dir\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.241132 4789 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress/router-default-5444994796-h8dsm"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.275102 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.240857 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.241220 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.240932 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.242364 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.242499 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.242635 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.242905 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.242994 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.248510 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.262068 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.262205 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.262300 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.263022 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.263050 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.263093 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.263125 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.264296 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.266307 4789 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.266351 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.266359 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.279964 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.281099 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.281315 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.281559 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.281905 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4s28"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.282061 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.282372 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.282470 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.282909 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-69txp"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.283636 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.284424 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.285384 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.285610 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.286065 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xf9qh"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.286218 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.286543 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5cgnl"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.286834 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.286842 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.287337 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.287498 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.293225 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.293834 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.299893 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.300505 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.307918 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.308490 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.310397 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.312073 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wk4x"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.323686 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.323726 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.324299 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.324846 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.324979 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.326079 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.326650 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.331551 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.345679 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-72rck"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.346603 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.347279 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.350226 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.352388 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.352981 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.353349 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.354562 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.356145 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-spvgg"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.357858 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.359766 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hk9wh"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.360857 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hk9wh" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.362296 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kssj7"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.363884 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mlcwl"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.365426 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ljwn7"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.365710 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.369064 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bp2hb"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.370983 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4s28"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.372484 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gtxzr"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.374043 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xf9qh"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.375090 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-config\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.375217 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-oauth-serving-cert\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.375324 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-etcd-serving-ca\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.375436 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-client-ca\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.375558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-config\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc 
kubenswrapper[4789]: I1124 11:32:37.375657 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22cf157e-ce67-43f4-bbaf-577720728887-audit-dir\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.375755 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-service-ca-bundle\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.375853 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb58z\" (UniqueName: \"kubernetes.io/projected/026c0fd3-78be-48ef-81cd-ba63abb9197d-kube-api-access-xb58z\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.375944 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.376047 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5489e784-b2d8-47f6-87b7-4c0b0786caaf-auth-proxy-config\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.376141 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5489e784-b2d8-47f6-87b7-4c0b0786caaf-machine-approver-tls\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.376231 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-audit-policies\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.376322 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-serving-cert\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.376416 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-audit\") pod 
\"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.376586 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.376815 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2v5m\" (UniqueName: \"kubernetes.io/projected/9380ccce-963f-42e6-b182-65e9bbf9f47e-kube-api-access-k2v5m\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.376944 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-dir\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377033 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377131 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg7b8\" (UniqueName: \"kubernetes.io/projected/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-kube-api-access-xg7b8\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377216 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-service-ca-bundle\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377226 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-etcd-client\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377294 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-oauth-config\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " 
pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377319 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-encryption-config\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377342 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377366 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-serving-cert\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377387 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-audit-dir\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377412 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22cf157e-ce67-43f4-bbaf-577720728887-node-pullsecrets\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377436 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-config\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377485 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377528 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5489e784-b2d8-47f6-87b7-4c0b0786caaf-config\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377550 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/584e1901-c470-4a3f-9461-7e97f4688399-serving-cert\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377572 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377606 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377632 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9380ccce-963f-42e6-b182-65e9bbf9f47e-serving-cert\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377654 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxv4\" (UniqueName: \"kubernetes.io/projected/4372e46e-19ca-487e-b2ee-1fea92a3197d-kube-api-access-zgxv4\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377675 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6klb\" (UniqueName: \"kubernetes.io/projected/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-kube-api-access-r6klb\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377696 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377718 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-serving-cert\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377739 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j6pl\" (UniqueName: \"kubernetes.io/projected/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-kube-api-access-4j6pl\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377767 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrdr\" (UniqueName: \"kubernetes.io/projected/c20b0775-ba72-4379-b5df-2ff35ffc2704-kube-api-access-fhrdr\") pod \"downloads-7954f5f757-mlcwl\" (UID: \"c20b0775-ba72-4379-b5df-2ff35ffc2704\") " pod="openshift-console/downloads-7954f5f757-mlcwl" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377789 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-client-ca\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377812 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-encryption-config\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377834 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-serving-cert\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377855 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377879 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-etcd-client\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377902 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wj45\" (UniqueName: \"kubernetes.io/projected/22cf157e-ce67-43f4-bbaf-577720728887-kube-api-access-7wj45\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377926 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-policies\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377952 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.377980 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378002 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378025 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-service-ca\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378052 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-config\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378073 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378097 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378122 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4372e46e-19ca-487e-b2ee-1fea92a3197d-serving-cert\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378143 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9da2bc3-3945-4a02-8613-39338321441d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378175 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thlhn\" (UniqueName: \"kubernetes.io/projected/5489e784-b2d8-47f6-87b7-4c0b0786caaf-kube-api-access-thlhn\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378198 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-image-import-ca\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378220 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57bvz\" (UniqueName: \"kubernetes.io/projected/584e1901-c470-4a3f-9461-7e97f4688399-kube-api-access-57bvz\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378242 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378287 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-trusted-ca-bundle\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378309 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378335 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-config\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378358 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6q4\" (UniqueName: \"kubernetes.io/projected/c9da2bc3-3945-4a02-8613-39338321441d-kube-api-access-vb6q4\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378382 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9da2bc3-3945-4a02-8613-39338321441d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378403 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378839 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klw64"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378888 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.378904 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q52tc"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.382163 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-config\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.382824 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-oauth-serving-cert\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.383275 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-etcd-serving-ca\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.384042 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-client-ca\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.385790 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-config\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 
11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.386263 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.386626 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5489e784-b2d8-47f6-87b7-4c0b0786caaf-config\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.387643 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.388164 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.388445 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-config\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.388446 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5489e784-b2d8-47f6-87b7-4c0b0786caaf-auth-proxy-config\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.388553 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22cf157e-ce67-43f4-bbaf-577720728887-audit-dir\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.396225 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.401616 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-audit-dir\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.401723 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22cf157e-ce67-43f4-bbaf-577720728887-node-pullsecrets\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.402811 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.402815 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9380ccce-963f-42e6-b182-65e9bbf9f47e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.403441 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9da2bc3-3945-4a02-8613-39338321441d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.404585 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-image-import-ca\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.404968 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.405824 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-trusted-ca-bundle\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.410007 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-etcd-client\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.410583 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-client-ca\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.410933 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xt8qf"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.411522 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4372e46e-19ca-487e-b2ee-1fea92a3197d-serving-cert\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.413330 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.414273 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-serving-cert\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.414369 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7zss"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.414471 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.418929 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9380ccce-963f-42e6-b182-65e9bbf9f47e-serving-cert\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.418989 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-72rck"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.419025 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.421259 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.424932 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-config\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.425640 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-service-ca\") pod 
\"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.425996 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.426169 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.426551 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-audit\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.426596 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-dir\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.427870 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-audit-policies\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.428002 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-policies\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.428131 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.428365 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.428715 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-serving-cert\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.428919 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.429228 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9da2bc3-3945-4a02-8613-39338321441d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.429278 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22cf157e-ce67-43f4-bbaf-577720728887-config\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.429410 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wk4x"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.429446 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22cf157e-ce67-43f4-bbaf-577720728887-encryption-config\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.429451 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.430219 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t2scc"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.430579 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-encryption-config\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.430911 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-serving-cert\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.431186 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-etcd-client\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.431221 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.433521 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.434633 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.434957 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.435210 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.435989 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5cgnl"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.437288 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-oauth-config\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.438222 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.438286 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-69txp"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.439112 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.439161 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hk9wh"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.439715 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.440150 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.442804 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5489e784-b2d8-47f6-87b7-4c0b0786caaf-machine-approver-tls\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.442867 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.442894 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.444007 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/584e1901-c470-4a3f-9461-7e97f4688399-serving-cert\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.444735 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.445486 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.445926 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.447577 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-serving-cert\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.448587 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.449980 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.451590 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.453193 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 
11:32:37.460439 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wkkmt"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.465728 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xt8qf"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.465767 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wkkmt"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.465782 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vng2k"] Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.466012 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.466014 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.467186 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.479783 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhrdr\" (UniqueName: \"kubernetes.io/projected/c20b0775-ba72-4379-b5df-2ff35ffc2704-kube-api-access-fhrdr\") pod \"downloads-7954f5f757-mlcwl\" (UID: \"c20b0775-ba72-4379-b5df-2ff35ffc2704\") " pod="openshift-console/downloads-7954f5f757-mlcwl" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.485776 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.506313 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.527038 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.546595 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.565121 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.585317 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.605659 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.625289 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.645788 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.666663 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.686830 4789 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.706729 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.726584 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.747102 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.767030 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.787826 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.806454 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.826000 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.846525 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.866608 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.886924 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.907055 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.926599 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.946814 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.966792 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 11:32:37 crc kubenswrapper[4789]: I1124 11:32:37.987305 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.006252 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.027108 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 11:32:38 crc 
kubenswrapper[4789]: I1124 11:32:38.046849 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.077875 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.086035 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.107809 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.121538 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.126601 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.146787 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.167261 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.186610 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.205879 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.227064 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.246216 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.266256 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.286116 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.304767 4789 request.go:700] Waited for 1.010593569s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.307763 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.326689 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.346373 4789 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.366264 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.386960 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.407056 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.426974 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.447277 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.466597 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.487156 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.507108 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.527761 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.546540 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.567010 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.586554 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.607299 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.625944 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.647699 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.687760 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.706869 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.726160 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 11:32:38 
crc kubenswrapper[4789]: I1124 11:32:38.746529 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.766166 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.787268 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.806282 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.826350 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.846190 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.876232 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.887344 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.906781 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.926571 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.946564 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 11:32:38 crc kubenswrapper[4789]: I1124 11:32:38.966265 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.010690 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb58z\" (UniqueName: \"kubernetes.io/projected/026c0fd3-78be-48ef-81cd-ba63abb9197d-kube-api-access-xb58z\") pod \"oauth-openshift-558db77b4-bp2hb\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") " pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.033586 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxv4\" (UniqueName: \"kubernetes.io/projected/4372e46e-19ca-487e-b2ee-1fea92a3197d-kube-api-access-zgxv4\") pod \"controller-manager-879f6c89f-j4swj\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.053011 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thlhn\" (UniqueName: \"kubernetes.io/projected/5489e784-b2d8-47f6-87b7-4c0b0786caaf-kube-api-access-thlhn\") pod \"machine-approver-56656f9798-z7ndg\" (UID: \"5489e784-b2d8-47f6-87b7-4c0b0786caaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.068412 4789 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.070310 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.073157 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57bvz\" (UniqueName: \"kubernetes.io/projected/584e1901-c470-4a3f-9461-7e97f4688399-kube-api-access-57bvz\") pod \"route-controller-manager-6576b87f9c-5lt8v\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.086380 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.107397 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.161264 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6q4\" (UniqueName: \"kubernetes.io/projected/c9da2bc3-3945-4a02-8613-39338321441d-kube-api-access-vb6q4\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lk2l\" (UID: \"c9da2bc3-3945-4a02-8613-39338321441d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.165312 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2v5m\" (UniqueName: \"kubernetes.io/projected/9380ccce-963f-42e6-b182-65e9bbf9f47e-kube-api-access-k2v5m\") pod \"authentication-operator-69f744f599-kssj7\" (UID: \"9380ccce-963f-42e6-b182-65e9bbf9f47e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.192365 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6klb\" (UniqueName: \"kubernetes.io/projected/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-kube-api-access-r6klb\") pod \"console-f9d7485db-ljwn7\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") " pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.215365 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j6pl\" (UniqueName: \"kubernetes.io/projected/17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643-kube-api-access-4j6pl\") pod \"apiserver-7bbb656c7d-jdbnn\" (UID: \"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.246246 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.247046 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.247363 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.269084 4789 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.272144 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wj45\" (UniqueName: \"kubernetes.io/projected/22cf157e-ce67-43f4-bbaf-577720728887-kube-api-access-7wj45\") pod \"apiserver-76f77b778f-gtxzr\" (UID: \"22cf157e-ce67-43f4-bbaf-577720728887\") " pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.273050 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg7b8\" (UniqueName: \"kubernetes.io/projected/bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b-kube-api-access-xg7b8\") pod \"openshift-config-operator-7777fb866f-spvgg\" (UID: \"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.287127 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.289947 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.303849 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhl2\" (UniqueName: \"kubernetes.io/projected/0f4736c2-dfae-4e07-ab51-55978257a8bf-kube-api-access-shhl2\") pod \"cluster-samples-operator-665b6dd947-svr79\" (UID: \"0f4736c2-dfae-4e07-ab51-55978257a8bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.303904 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-config\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.303932 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51c0ab73-bbc1-4f70-afa7-059dec256973-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.303955 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1318a733-4e15-40bc-a40c-da929809e25c-serving-cert\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.303977 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2000def-4dbe-4976-a901-111027907fa5-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.303998 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbdp4\" (UniqueName: \"kubernetes.io/projected/b2000def-4dbe-4976-a901-111027907fa5-kube-api-access-nbdp4\") pod \"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.304918 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1318a733-4e15-40bc-a40c-da929809e25c-trusted-ca\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.304951 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ssdf\" (UniqueName: \"kubernetes.io/projected/4a1856d7-6ca5-475f-8476-b2325d595447-kube-api-access-6ssdf\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305006 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f4736c2-dfae-4e07-ab51-55978257a8bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-svr79\" (UID: \"0f4736c2-dfae-4e07-ab51-55978257a8bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305074 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-config\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305144 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-certificates\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305193 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97dca1c4-6dff-48cd-8e41-c41d0c850fda-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305356 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-ca\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305406 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-client\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305439 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97dca1c4-6dff-48cd-8e41-c41d0c850fda-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305503 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2000def-4dbe-4976-a901-111027907fa5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305537 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1318a733-4e15-40bc-a40c-da929809e25c-config\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305541 4789 request.go:700] Waited for 1.838571104s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305629 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-tls\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305652 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-images\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305701 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.305768 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt62b\" (UniqueName: \"kubernetes.io/projected/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-kube-api-access-mt62b\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.306876 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97dca1c4-6dff-48cd-8e41-c41d0c850fda-config\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.306968 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-service-ca\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.307012 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51c0ab73-bbc1-4f70-afa7-059dec256973-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.307255 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnc5\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-kube-api-access-2gnc5\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.307346 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krn5w\" (UniqueName: \"kubernetes.io/projected/1318a733-4e15-40bc-a40c-da929809e25c-kube-api-access-krn5w\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.309709 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.309774 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-bound-sa-token\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.309870 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1856d7-6ca5-475f-8476-b2325d595447-serving-cert\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.309915 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-trusted-ca\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.310968 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 11:32:39 crc kubenswrapper[4789]: E1124 11:32:39.311867 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:39.811830373 +0000 UTC m=+142.394301832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.322948 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.326495 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.346126 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.348758 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.368209 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.374940 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.376575 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.394494 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.395755 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bp2hb"] Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.407412 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhrdr\" (UniqueName: \"kubernetes.io/projected/c20b0775-ba72-4379-b5df-2ff35ffc2704-kube-api-access-fhrdr\") pod \"downloads-7954f5f757-mlcwl\" (UID: \"c20b0775-ba72-4379-b5df-2ff35ffc2704\") " pod="openshift-console/downloads-7954f5f757-mlcwl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412408 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412573 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-config\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412603 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412630 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9vc\" (UniqueName: \"kubernetes.io/projected/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-kube-api-access-wq9vc\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412647 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-certificates\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412663 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-metrics-certs\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412678 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clcpt\" (UniqueName: \"kubernetes.io/projected/b2fe1c31-7dc8-4f55-b853-15de35052479-kube-api-access-clcpt\") pod 
\"package-server-manager-789f6589d5-pcnqw\" (UID: \"b2fe1c31-7dc8-4f55-b853-15de35052479\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412693 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57a4e2c7-255f-466f-a75d-3517b390ad06-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412715 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04826e9c-2f6b-4215-b334-c52ee5f5e150-metrics-tls\") pod \"dns-operator-744455d44c-k4s28\" (UID: \"04826e9c-2f6b-4215-b334-c52ee5f5e150\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412729 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-socket-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412745 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495jl\" (UniqueName: \"kubernetes.io/projected/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-kube-api-access-495jl\") pod \"service-ca-operator-777779d784-72rck\" (UID: \"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412769 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-ca\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412783 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2000def-4dbe-4976-a901-111027907fa5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412802 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c02b858d-680d-415a-be28-5f382cdaaac1-proxy-tls\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412818 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88gsn\" (UniqueName: \"kubernetes.io/projected/c02b858d-680d-415a-be28-5f382cdaaac1-kube-api-access-88gsn\") pod \"machine-config-operator-74547568cd-69txp\" (UID: 
\"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412835 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1318a733-4e15-40bc-a40c-da929809e25c-config\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412848 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a4e2c7-255f-466f-a75d-3517b390ad06-config\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412865 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49gb5\" (UniqueName: \"kubernetes.io/projected/9235e424-26c2-4a58-8347-6eeabd8fc282-kube-api-access-49gb5\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412882 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-images\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412900 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9235e424-26c2-4a58-8347-6eeabd8fc282-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412915 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412931 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cbf039a2-0b1a-4284-9e4f-30178313bb09-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: \"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412946 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-serving-cert\") pod \"service-ca-operator-777779d784-72rck\" (UID: 
\"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412960 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwj7b\" (UniqueName: \"kubernetes.io/projected/88ed3262-9f36-4edf-ace6-4f739dcb8070-kube-api-access-mwj7b\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412976 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxp2z\" (UniqueName: \"kubernetes.io/projected/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-kube-api-access-bxp2z\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.412991 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9aeda001-70e0-4e29-b122-e75d98325c1d-profile-collector-cert\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413009 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57a4e2c7-255f-466f-a75d-3517b390ad06-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413031 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krn5w\" (UniqueName: \"kubernetes.io/projected/1318a733-4e15-40bc-a40c-da929809e25c-kube-api-access-krn5w\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413048 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-srv-cert\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413063 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2ct\" (UniqueName: \"kubernetes.io/projected/9aeda001-70e0-4e29-b122-e75d98325c1d-kube-api-access-qh2ct\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413078 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413093 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/153e2e1a-8390-42f3-b959-d3607dfef848-metrics-tls\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413115 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-stats-auth\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413128 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-mountpoint-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413144 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1856d7-6ca5-475f-8476-b2325d595447-serving-cert\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413159 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07688099-4b3c-4fae-9eba-b3d7308cf8e6-apiservice-cert\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413172 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-csi-data-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413187 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlq5j\" (UniqueName: \"kubernetes.io/projected/666ba159-709e-4b10-8d3d-6a7ae785f61f-kube-api-access-tlq5j\") pod \"ingress-canary-hk9wh\" (UID: \"666ba159-709e-4b10-8d3d-6a7ae785f61f\") " pod="openshift-ingress-canary/ingress-canary-hk9wh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413203 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec7c435-8991-4348-b471-dfc3c15a0001-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" 
Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413218 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-config\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413233 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9235e424-26c2-4a58-8347-6eeabd8fc282-trusted-ca\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413247 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-registration-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413272 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-plugins-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413293 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51c0ab73-bbc1-4f70-afa7-059dec256973-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413308 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-config\") pod \"service-ca-operator-777779d784-72rck\" (UID: \"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413331 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1318a733-4e15-40bc-a40c-da929809e25c-serving-cert\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413349 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2000def-4dbe-4976-a901-111027907fa5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.413418 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hx5l\" (UniqueName: 
\"kubernetes.io/projected/2cb92340-d666-48d7-8b9e-5f25c48b546f-kube-api-access-2hx5l\") pod \"multus-admission-controller-857f4d67dd-5cgnl\" (UID: \"2cb92340-d666-48d7-8b9e-5f25c48b546f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414334 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ff878878-c8f6-420d-b564-a98660220eba-signing-key\") pod \"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414368 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-images\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414370 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1318a733-4e15-40bc-a40c-da929809e25c-trusted-ca\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414412 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjks\" (UniqueName: \"kubernetes.io/projected/dec7c435-8991-4348-b471-dfc3c15a0001-kube-api-access-vzjks\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414416 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2000def-4dbe-4976-a901-111027907fa5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414430 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/07688099-4b3c-4fae-9eba-b3d7308cf8e6-tmpfs\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: E1124 11:32:39.414527 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:39.91450648 +0000 UTC m=+142.496977859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414561 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-secret-volume\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414589 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1c31-7dc8-4f55-b853-15de35052479-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pcnqw\" (UID: \"b2fe1c31-7dc8-4f55-b853-15de35052479\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414616 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpjj\" (UniqueName: \"kubernetes.io/projected/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-kube-api-access-jlpjj\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414641 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-service-ca-bundle\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414663 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2xn5\" (UniqueName: \"kubernetes.io/projected/153e2e1a-8390-42f3-b959-d3607dfef848-kube-api-access-k2xn5\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414679 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec7c435-8991-4348-b471-dfc3c15a0001-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414696 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-default-certificate\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " 
pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.414731 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97dca1c4-6dff-48cd-8e41-c41d0c850fda-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.415364 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-certificates\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.415431 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-config\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.415447 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1318a733-4e15-40bc-a40c-da929809e25c-trusted-ca\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.416339 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2000def-4dbe-4976-a901-111027907fa5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.417020 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1318a733-4e15-40bc-a40c-da929809e25c-config\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.417250 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-mlcwl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.417350 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ff878878-c8f6-420d-b564-a98660220eba-signing-cabundle\") pod \"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.417935 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-client\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418000 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97dca1c4-6dff-48cd-8e41-c41d0c850fda-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418032 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9235e424-26c2-4a58-8347-6eeabd8fc282-metrics-tls\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418043 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-config\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418083 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418195 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88ed3262-9f36-4edf-ace6-4f739dcb8070-certs\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418229 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 
11:32:39.418425 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2cb92340-d666-48d7-8b9e-5f25c48b546f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5cgnl\" (UID: \"2cb92340-d666-48d7-8b9e-5f25c48b546f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418666 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xbvf\" (UniqueName: \"kubernetes.io/projected/48ee479a-ea6a-4831-858a-1cdfaca6762c-kube-api-access-4xbvf\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418710 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07688099-4b3c-4fae-9eba-b3d7308cf8e6-webhook-cert\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418823 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-tls\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418858 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n48g\" (UniqueName: \"kubernetes.io/projected/a6a654d4-4e05-4848-ab14-624f78b93cfa-kube-api-access-4n48g\") pod \"control-plane-machine-set-operator-78cbb6b69f-fxzq9\" (UID: \"a6a654d4-4e05-4848-ab14-624f78b93cfa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418875 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4wh\" (UniqueName: \"kubernetes.io/projected/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-kube-api-access-sx4wh\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418904 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418935 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 
11:32:39.418953 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wnrg\" (UniqueName: \"kubernetes.io/projected/04826e9c-2f6b-4215-b334-c52ee5f5e150-kube-api-access-6wnrg\") pod \"dns-operator-744455d44c-k4s28\" (UID: \"04826e9c-2f6b-4215-b334-c52ee5f5e150\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.418969 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzssz\" (UniqueName: \"kubernetes.io/projected/07688099-4b3c-4fae-9eba-b3d7308cf8e6-kube-api-access-rzssz\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419001 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt62b\" (UniqueName: \"kubernetes.io/projected/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-kube-api-access-mt62b\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419018 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97dca1c4-6dff-48cd-8e41-c41d0c850fda-config\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419039 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-config-volume\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419057 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twsv9\" (UniqueName: \"kubernetes.io/projected/2e152bba-2c0e-4f46-8bc9-279649243e6c-kube-api-access-twsv9\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419109 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwbc6\" (UniqueName: \"kubernetes.io/projected/ff878878-c8f6-420d-b564-a98660220eba-kube-api-access-fwbc6\") pod \"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419127 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51c0ab73-bbc1-4f70-afa7-059dec256973-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419128 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51c0ab73-bbc1-4f70-afa7-059dec256973-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419144 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-service-ca\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419162 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c02b858d-680d-415a-be28-5f382cdaaac1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419857 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97dca1c4-6dff-48cd-8e41-c41d0c850fda-config\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.419918 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-ca\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420115 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51c0ab73-bbc1-4f70-afa7-059dec256973-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420166 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88ed3262-9f36-4edf-ace6-4f739dcb8070-node-bootstrap-token\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420448 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-service-ca\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420560 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvpv\" (UniqueName: \"kubernetes.io/projected/cbf039a2-0b1a-4284-9e4f-30178313bb09-kube-api-access-vkvpv\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: 
\"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420621 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnc5\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-kube-api-access-2gnc5\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420762 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-bound-sa-token\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420800 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9aeda001-70e0-4e29-b122-e75d98325c1d-srv-cert\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420875 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420910 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxxv\" (UniqueName: \"kubernetes.io/projected/43b17f72-4406-4ea9-99b5-6683ee119e5a-kube-api-access-gmxxv\") pod \"migrator-59844c95c7-mjfmp\" (UID: \"43b17f72-4406-4ea9-99b5-6683ee119e5a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.420964 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-trusted-ca\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.421744 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/153e2e1a-8390-42f3-b959-d3607dfef848-config-volume\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.421799 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhl2\" (UniqueName: \"kubernetes.io/projected/0f4736c2-dfae-4e07-ab51-55978257a8bf-kube-api-access-shhl2\") pod \"cluster-samples-operator-665b6dd947-svr79\" (UID: \"0f4736c2-dfae-4e07-ab51-55978257a8bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" Nov 24 11:32:39 crc 
kubenswrapper[4789]: I1124 11:32:39.421845 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.421870 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/666ba159-709e-4b10-8d3d-6a7ae785f61f-cert\") pod \"ingress-canary-hk9wh\" (UID: \"666ba159-709e-4b10-8d3d-6a7ae785f61f\") " pod="openshift-ingress-canary/ingress-canary-hk9wh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.421896 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cbf039a2-0b1a-4284-9e4f-30178313bb09-proxy-tls\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: \"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.422048 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbdp4\" (UniqueName: \"kubernetes.io/projected/b2000def-4dbe-4976-a901-111027907fa5-kube-api-access-nbdp4\") pod \"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.422118 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c02b858d-680d-415a-be28-5f382cdaaac1-images\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.422300 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ssdf\" (UniqueName: \"kubernetes.io/projected/4a1856d7-6ca5-475f-8476-b2325d595447-kube-api-access-6ssdf\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.422390 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6a654d4-4e05-4848-ab14-624f78b93cfa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fxzq9\" (UID: \"a6a654d4-4e05-4848-ab14-624f78b93cfa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.422437 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f4736c2-dfae-4e07-ab51-55978257a8bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-svr79\" (UID: \"0f4736c2-dfae-4e07-ab51-55978257a8bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" Nov 24 11:32:39 
crc kubenswrapper[4789]: I1124 11:32:39.422533 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.423544 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1318a733-4e15-40bc-a40c-da929809e25c-serving-cert\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.424662 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-tls\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.429049 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-trusted-ca\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.429757 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97dca1c4-6dff-48cd-8e41-c41d0c850fda-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.432339 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4a1856d7-6ca5-475f-8476-b2325d595447-etcd-client\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.436235 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f4736c2-dfae-4e07-ab51-55978257a8bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-svr79\" (UID: \"0f4736c2-dfae-4e07-ab51-55978257a8bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.436244 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.436252 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1856d7-6ca5-475f-8476-b2325d595447-serving-cert\") pod 
\"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.461793 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krn5w\" (UniqueName: \"kubernetes.io/projected/1318a733-4e15-40bc-a40c-da929809e25c-kube-api-access-krn5w\") pod \"console-operator-58897d9998-t2scc\" (UID: \"1318a733-4e15-40bc-a40c-da929809e25c\") " pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.488427 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97dca1c4-6dff-48cd-8e41-c41d0c850fda-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rc4ml\" (UID: \"97dca1c4-6dff-48cd-8e41-c41d0c850fda\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.512674 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt62b\" (UniqueName: \"kubernetes.io/projected/d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c-kube-api-access-mt62b\") pod \"machine-api-operator-5694c8668f-klw64\" (UID: \"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.522337 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnc5\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-kube-api-access-2gnc5\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523690 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9235e424-26c2-4a58-8347-6eeabd8fc282-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523722 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523760 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cbf039a2-0b1a-4284-9e4f-30178313bb09-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: \"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523778 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-serving-cert\") pod \"service-ca-operator-777779d784-72rck\" (UID: \"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523794 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwj7b\" (UniqueName: \"kubernetes.io/projected/88ed3262-9f36-4edf-ace6-4f739dcb8070-kube-api-access-mwj7b\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523824 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxp2z\" (UniqueName: \"kubernetes.io/projected/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-kube-api-access-bxp2z\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523839 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9aeda001-70e0-4e29-b122-e75d98325c1d-profile-collector-cert\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523855 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57a4e2c7-255f-466f-a75d-3517b390ad06-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523877 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh2ct\" (UniqueName: \"kubernetes.io/projected/9aeda001-70e0-4e29-b122-e75d98325c1d-kube-api-access-qh2ct\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523909 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523924 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-srv-cert\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523943 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523975 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/153e2e1a-8390-42f3-b959-d3607dfef848-metrics-tls\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.523994 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-mountpoint-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524008 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-stats-auth\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524024 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07688099-4b3c-4fae-9eba-b3d7308cf8e6-apiservice-cert\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524080 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-csi-data-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524095 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlq5j\" (UniqueName: \"kubernetes.io/projected/666ba159-709e-4b10-8d3d-6a7ae785f61f-kube-api-access-tlq5j\") pod \"ingress-canary-hk9wh\" (UID: \"666ba159-709e-4b10-8d3d-6a7ae785f61f\") " pod="openshift-ingress-canary/ingress-canary-hk9wh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524135 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec7c435-8991-4348-b471-dfc3c15a0001-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524154 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9235e424-26c2-4a58-8347-6eeabd8fc282-trusted-ca\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524180 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-registration-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524211 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-plugins-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524225 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-config\") pod \"service-ca-operator-777779d784-72rck\" (UID: \"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524241 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hx5l\" (UniqueName: \"kubernetes.io/projected/2cb92340-d666-48d7-8b9e-5f25c48b546f-kube-api-access-2hx5l\") pod \"multus-admission-controller-857f4d67dd-5cgnl\" (UID: \"2cb92340-d666-48d7-8b9e-5f25c48b546f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524477 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ff878878-c8f6-420d-b564-a98660220eba-signing-key\") pod \"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524504 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjks\" (UniqueName: \"kubernetes.io/projected/dec7c435-8991-4348-b471-dfc3c15a0001-kube-api-access-vzjks\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524536 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/07688099-4b3c-4fae-9eba-b3d7308cf8e6-tmpfs\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-secret-volume\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524582 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1c31-7dc8-4f55-b853-15de35052479-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pcnqw\" (UID: \"b2fe1c31-7dc8-4f55-b853-15de35052479\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524810 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2xn5\" (UniqueName: \"kubernetes.io/projected/153e2e1a-8390-42f3-b959-d3607dfef848-kube-api-access-k2xn5\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524833 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec7c435-8991-4348-b471-dfc3c15a0001-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524866 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpjj\" (UniqueName: \"kubernetes.io/projected/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-kube-api-access-jlpjj\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524883 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-service-ca-bundle\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524897 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-default-certificate\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524916 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9235e424-26c2-4a58-8347-6eeabd8fc282-metrics-tls\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524951 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ff878878-c8f6-420d-b564-a98660220eba-signing-cabundle\") pod \"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524970 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.524985 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4xbvf\" (UniqueName: \"kubernetes.io/projected/48ee479a-ea6a-4831-858a-1cdfaca6762c-kube-api-access-4xbvf\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525001 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88ed3262-9f36-4edf-ace6-4f739dcb8070-certs\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525050 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525067 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2cb92340-d666-48d7-8b9e-5f25c48b546f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5cgnl\" (UID: \"2cb92340-d666-48d7-8b9e-5f25c48b546f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525133 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07688099-4b3c-4fae-9eba-b3d7308cf8e6-webhook-cert\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525155 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n48g\" (UniqueName: \"kubernetes.io/projected/a6a654d4-4e05-4848-ab14-624f78b93cfa-kube-api-access-4n48g\") pod \"control-plane-machine-set-operator-78cbb6b69f-fxzq9\" (UID: \"a6a654d4-4e05-4848-ab14-624f78b93cfa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525175 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525224 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx4wh\" (UniqueName: \"kubernetes.io/projected/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-kube-api-access-sx4wh\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525301 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/cbf039a2-0b1a-4284-9e4f-30178313bb09-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: \"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.525248 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wnrg\" (UniqueName: \"kubernetes.io/projected/04826e9c-2f6b-4215-b334-c52ee5f5e150-kube-api-access-6wnrg\") pod \"dns-operator-744455d44c-k4s28\" (UID: \"04826e9c-2f6b-4215-b334-c52ee5f5e150\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.527395 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzssz\" (UniqueName: \"kubernetes.io/projected/07688099-4b3c-4fae-9eba-b3d7308cf8e6-kube-api-access-rzssz\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.527485 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-config-volume\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.527567 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-registration-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.527613 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-mountpoint-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.527771 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-plugins-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.528325 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twsv9\" (UniqueName: \"kubernetes.io/projected/2e152bba-2c0e-4f46-8bc9-279649243e6c-kube-api-access-twsv9\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.530214 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-config\") pod \"service-ca-operator-777779d784-72rck\" (UID: \"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 
11:32:39.530957 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c02b858d-680d-415a-be28-5f382cdaaac1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.531037 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwbc6\" (UniqueName: \"kubernetes.io/projected/ff878878-c8f6-420d-b564-a98660220eba-kube-api-access-fwbc6\") pod \"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.531064 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88ed3262-9f36-4edf-ace6-4f739dcb8070-node-bootstrap-token\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.531115 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkvpv\" (UniqueName: \"kubernetes.io/projected/cbf039a2-0b1a-4284-9e4f-30178313bb09-kube-api-access-vkvpv\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: \"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.531139 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9aeda001-70e0-4e29-b122-e75d98325c1d-srv-cert\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.531190 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/07688099-4b3c-4fae-9eba-b3d7308cf8e6-tmpfs\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: E1124 11:32:39.533224 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.033210899 +0000 UTC m=+142.615682278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.533365 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.534201 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.539595 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9235e424-26c2-4a58-8347-6eeabd8fc282-trusted-ca\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.540233 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c02b858d-680d-415a-be28-5f382cdaaac1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.541673 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-config-volume\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.541891 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec7c435-8991-4348-b471-dfc3c15a0001-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.542015 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-csi-data-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.542389 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-service-ca-bundle\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.547628 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.547757 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmxxv\" (UniqueName: \"kubernetes.io/projected/43b17f72-4406-4ea9-99b5-6683ee119e5a-kube-api-access-gmxxv\") pod \"migrator-59844c95c7-mjfmp\" (UID: \"43b17f72-4406-4ea9-99b5-6683ee119e5a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.547910 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/153e2e1a-8390-42f3-b959-d3607dfef848-config-volume\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.547992 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548087 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/666ba159-709e-4b10-8d3d-6a7ae785f61f-cert\") pod \"ingress-canary-hk9wh\" (UID: \"666ba159-709e-4b10-8d3d-6a7ae785f61f\") " pod="openshift-ingress-canary/ingress-canary-hk9wh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548155 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cbf039a2-0b1a-4284-9e4f-30178313bb09-proxy-tls\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: \"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548229 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c02b858d-680d-415a-be28-5f382cdaaac1-images\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548320 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6a654d4-4e05-4848-ab14-624f78b93cfa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fxzq9\" (UID: \"a6a654d4-4e05-4848-ab14-624f78b93cfa\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548395 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548486 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548568 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9vc\" (UniqueName: \"kubernetes.io/projected/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-kube-api-access-wq9vc\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548657 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-metrics-certs\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548738 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57a4e2c7-255f-466f-a75d-3517b390ad06-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548812 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clcpt\" (UniqueName: \"kubernetes.io/projected/b2fe1c31-7dc8-4f55-b853-15de35052479-kube-api-access-clcpt\") pod \"package-server-manager-789f6589d5-pcnqw\" (UID: \"b2fe1c31-7dc8-4f55-b853-15de35052479\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548888 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04826e9c-2f6b-4215-b334-c52ee5f5e150-metrics-tls\") pod \"dns-operator-744455d44c-k4s28\" (UID: \"04826e9c-2f6b-4215-b334-c52ee5f5e150\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.548955 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-socket-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 
11:32:39.549032 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495jl\" (UniqueName: \"kubernetes.io/projected/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-kube-api-access-495jl\") pod \"service-ca-operator-777779d784-72rck\" (UID: \"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.549118 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c02b858d-680d-415a-be28-5f382cdaaac1-proxy-tls\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.549185 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88gsn\" (UniqueName: \"kubernetes.io/projected/c02b858d-680d-415a-be28-5f382cdaaac1-kube-api-access-88gsn\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.549274 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a4e2c7-255f-466f-a75d-3517b390ad06-config\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.549380 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49gb5\" (UniqueName: \"kubernetes.io/projected/9235e424-26c2-4a58-8347-6eeabd8fc282-kube-api-access-49gb5\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.550087 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/153e2e1a-8390-42f3-b959-d3607dfef848-metrics-tls\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.551102 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/153e2e1a-8390-42f3-b959-d3607dfef848-config-volume\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.551570 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.551844 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v"] Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 
11:32:39.552104 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.552607 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ff878878-c8f6-420d-b564-a98660220eba-signing-key\") pod \"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.553232 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-serving-cert\") pod \"service-ca-operator-777779d784-72rck\" (UID: \"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.553329 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9aeda001-70e0-4e29-b122-e75d98325c1d-profile-collector-cert\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.553572 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.555433 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07688099-4b3c-4fae-9eba-b3d7308cf8e6-apiservice-cert\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.555825 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-srv-cert\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.557045 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1c31-7dc8-4f55-b853-15de35052479-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pcnqw\" (UID: \"b2fe1c31-7dc8-4f55-b853-15de35052479\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.558088 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ff878878-c8f6-420d-b564-a98660220eba-signing-cabundle\") pod 
\"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.558093 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-bound-sa-token\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.558198 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e152bba-2c0e-4f46-8bc9-279649243e6c-socket-dir\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.558618 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c02b858d-680d-415a-be28-5f382cdaaac1-images\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.559994 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a4e2c7-255f-466f-a75d-3517b390ad06-config\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.562891 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-secret-volume\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.565687 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88ed3262-9f36-4edf-ace6-4f739dcb8070-certs\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.565859 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57a4e2c7-255f-466f-a75d-3517b390ad06-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.567579 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn"] Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.570356 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07688099-4b3c-4fae-9eba-b3d7308cf8e6-webhook-cert\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.570355 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-stats-auth\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.570574 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88ed3262-9f36-4edf-ace6-4f739dcb8070-node-bootstrap-token\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.576340 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9aeda001-70e0-4e29-b122-e75d98325c1d-srv-cert\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.576833 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec7c435-8991-4348-b471-dfc3c15a0001-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.577059 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-default-certificate\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.577268 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.577600 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/666ba159-709e-4b10-8d3d-6a7ae785f61f-cert\") pod \"ingress-canary-hk9wh\" (UID: \"666ba159-709e-4b10-8d3d-6a7ae785f61f\") " pod="openshift-ingress-canary/ingress-canary-hk9wh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.578027 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2cb92340-d666-48d7-8b9e-5f25c48b546f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5cgnl\" (UID: \"2cb92340-d666-48d7-8b9e-5f25c48b546f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.579276 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.582133 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cbf039a2-0b1a-4284-9e4f-30178313bb09-proxy-tls\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: \"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.582795 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c02b858d-680d-415a-be28-5f382cdaaac1-proxy-tls\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.583218 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhl2\" (UniqueName: \"kubernetes.io/projected/0f4736c2-dfae-4e07-ab51-55978257a8bf-kube-api-access-shhl2\") pod \"cluster-samples-operator-665b6dd947-svr79\" (UID: \"0f4736c2-dfae-4e07-ab51-55978257a8bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.583529 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbdp4\" (UniqueName: \"kubernetes.io/projected/b2000def-4dbe-4976-a901-111027907fa5-kube-api-access-nbdp4\") pod \"openshift-apiserver-operator-796bbdcf4f-28skr\" (UID: \"b2000def-4dbe-4976-a901-111027907fa5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.583799 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04826e9c-2f6b-4215-b334-c52ee5f5e150-metrics-tls\") pod \"dns-operator-744455d44c-k4s28\" (UID: \"04826e9c-2f6b-4215-b334-c52ee5f5e150\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.584139 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6a654d4-4e05-4848-ab14-624f78b93cfa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fxzq9\" (UID: \"a6a654d4-4e05-4848-ab14-624f78b93cfa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.584858 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-metrics-certs\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.592306 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9235e424-26c2-4a58-8347-6eeabd8fc282-metrics-tls\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: 
\"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.598917 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ssdf\" (UniqueName: \"kubernetes.io/projected/4a1856d7-6ca5-475f-8476-b2325d595447-kube-api-access-6ssdf\") pod \"etcd-operator-b45778765-v7zss\" (UID: \"4a1856d7-6ca5-475f-8476-b2325d595447\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.642656 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9235e424-26c2-4a58-8347-6eeabd8fc282-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.653813 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:39 crc kubenswrapper[4789]: E1124 11:32:39.654279 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.154266074 +0000 UTC m=+142.736737453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.675670 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxp2z\" (UniqueName: \"kubernetes.io/projected/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-kube-api-access-bxp2z\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.685831 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwj7b\" (UniqueName: \"kubernetes.io/projected/88ed3262-9f36-4edf-ace6-4f739dcb8070-kube-api-access-mwj7b\") pod \"machine-config-server-vng2k\" (UID: \"88ed3262-9f36-4edf-ace6-4f739dcb8070\") " pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.699882 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.710589 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-spvgg"] Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.711272 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.722346 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh2ct\" (UniqueName: \"kubernetes.io/projected/9aeda001-70e0-4e29-b122-e75d98325c1d-kube-api-access-qh2ct\") pod \"catalog-operator-68c6474976-j6s5s\" (UID: \"9aeda001-70e0-4e29-b122-e75d98325c1d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.725843 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.726882 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjks\" (UniqueName: \"kubernetes.io/projected/dec7c435-8991-4348-b471-dfc3c15a0001-kube-api-access-vzjks\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7fjn\" (UID: \"dec7c435-8991-4348-b471-dfc3c15a0001\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.747276 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.747899 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.755542 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.755669 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" Nov 24 11:32:39 crc kubenswrapper[4789]: E1124 11:32:39.756153 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.256140898 +0000 UTC m=+142.838612277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.758267 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hx5l\" (UniqueName: \"kubernetes.io/projected/2cb92340-d666-48d7-8b9e-5f25c48b546f-kube-api-access-2hx5l\") pod \"multus-admission-controller-857f4d67dd-5cgnl\" (UID: \"2cb92340-d666-48d7-8b9e-5f25c48b546f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.776144 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vng2k" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.791207 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j4swj"] Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.804191 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wnrg\" (UniqueName: \"kubernetes.io/projected/04826e9c-2f6b-4215-b334-c52ee5f5e150-kube-api-access-6wnrg\") pod \"dns-operator-744455d44c-k4s28\" (UID: \"04826e9c-2f6b-4215-b334-c52ee5f5e150\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.808037 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmvs5\" (UID: \"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.818869 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlq5j\" (UniqueName: \"kubernetes.io/projected/666ba159-709e-4b10-8d3d-6a7ae785f61f-kube-api-access-tlq5j\") pod \"ingress-canary-hk9wh\" (UID: \"666ba159-709e-4b10-8d3d-6a7ae785f61f\") " pod="openshift-ingress-canary/ingress-canary-hk9wh" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.820831 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gtxzr"] Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.828378 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2xn5\" (UniqueName: \"kubernetes.io/projected/153e2e1a-8390-42f3-b959-d3607dfef848-kube-api-access-k2xn5\") pod \"dns-default-xt8qf\" (UID: \"153e2e1a-8390-42f3-b959-d3607dfef848\") " pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.856392 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:39 crc kubenswrapper[4789]: E1124 11:32:39.856932 4789 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.356916492 +0000 UTC m=+142.939387871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.856950 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzssz\" (UniqueName: \"kubernetes.io/projected/07688099-4b3c-4fae-9eba-b3d7308cf8e6-kube-api-access-rzssz\") pod \"packageserver-d55dfcdfc-j4dj6\" (UID: \"07688099-4b3c-4fae-9eba-b3d7308cf8e6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.859394 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlpjj\" (UniqueName: \"kubernetes.io/projected/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-kube-api-access-jlpjj\") pod \"collect-profiles-29399730-77vnb\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.872580 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kssj7"] Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.875418 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.884914 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.891291 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twsv9\" (UniqueName: \"kubernetes.io/projected/2e152bba-2c0e-4f46-8bc9-279649243e6c-kube-api-access-twsv9\") pod \"csi-hostpathplugin-wkkmt\" (UID: \"2e152bba-2c0e-4f46-8bc9-279649243e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.911220 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkvpv\" (UniqueName: \"kubernetes.io/projected/cbf039a2-0b1a-4284-9e4f-30178313bb09-kube-api-access-vkvpv\") pod \"machine-config-controller-84d6567774-g7l4l\" (UID: \"cbf039a2-0b1a-4284-9e4f-30178313bb09\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.917579 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.925505 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwbc6\" (UniqueName: \"kubernetes.io/projected/ff878878-c8f6-420d-b564-a98660220eba-kube-api-access-fwbc6\") pod \"service-ca-9c57cc56f-9wk4x\" (UID: \"ff878878-c8f6-420d-b564-a98660220eba\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.929288 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.934961 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" event={"ID":"026c0fd3-78be-48ef-81cd-ba63abb9197d","Type":"ContainerStarted","Data":"732a2d563642a1147bac1e1c0ba4ea67847607a15984fb93acc228693def7b9e"} Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.940153 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.950180 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49gb5\" (UniqueName: \"kubernetes.io/projected/9235e424-26c2-4a58-8347-6eeabd8fc282-kube-api-access-49gb5\") pod \"ingress-operator-5b745b69d9-hqkkq\" (UID: \"9235e424-26c2-4a58-8347-6eeabd8fc282\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.951060 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l"] Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.951182 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" event={"ID":"22cf157e-ce67-43f4-bbaf-577720728887","Type":"ContainerStarted","Data":"3f14ad47c5d60f1699584bb6348dd4036b030268d78d80e50a72f99471244f26"} Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.957105 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.958299 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:39 crc kubenswrapper[4789]: E1124 11:32:39.958623 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.458610192 +0000 UTC m=+143.041081571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.958877 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" event={"ID":"584e1901-c470-4a3f-9461-7e97f4688399","Type":"ContainerStarted","Data":"2d643dd176cbbbfb94a6977ed6171aa3f70d99a970c73ea87f8c4d28fb513006"} Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.958912 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" event={"ID":"584e1901-c470-4a3f-9461-7e97f4688399","Type":"ContainerStarted","Data":"b2841ce3954d8c2a635efc049ca34332f05b37b784e4511e79b020971d4a05b9"} Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.959542 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.964944 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.977723 4789 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5lt8v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.977774 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" podUID="584e1901-c470-4a3f-9461-7e97f4688399" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.977782 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.979652 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n48g\" (UniqueName: \"kubernetes.io/projected/a6a654d4-4e05-4848-ab14-624f78b93cfa-kube-api-access-4n48g\") pod \"control-plane-machine-set-operator-78cbb6b69f-fxzq9\" (UID: \"a6a654d4-4e05-4848-ab14-624f78b93cfa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" Nov 24 11:32:39 crc kubenswrapper[4789]: W1124 11:32:39.981181 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88ed3262_9f36_4edf_ace6_4f739dcb8070.slice/crio-da451f9afabe3f9a74563225c91c8e011bbaf7efd8e89582b435b62ac0aa08c1 WatchSource:0}: Error finding container da451f9afabe3f9a74563225c91c8e011bbaf7efd8e89582b435b62ac0aa08c1: Status 404 returned error can't find the container with id da451f9afabe3f9a74563225c91c8e011bbaf7efd8e89582b435b62ac0aa08c1 Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.983119 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.984171 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" event={"ID":"4372e46e-19ca-487e-b2ee-1fea92a3197d","Type":"ContainerStarted","Data":"fe74cd8802aba7446cac66550d0662301bf653c185f7d8caea30ef60479bccfd"} Nov 24 11:32:39 crc kubenswrapper[4789]: I1124 11:32:39.993210 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmxxv\" (UniqueName: \"kubernetes.io/projected/43b17f72-4406-4ea9-99b5-6683ee119e5a-kube-api-access-gmxxv\") pod \"migrator-59844c95c7-mjfmp\" (UID: \"43b17f72-4406-4ea9-99b5-6683ee119e5a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.002116 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xbvf\" (UniqueName: \"kubernetes.io/projected/48ee479a-ea6a-4831-858a-1cdfaca6762c-kube-api-access-4xbvf\") pod \"marketplace-operator-79b997595-xf9qh\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.019803 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.022391 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" event={"ID":"5489e784-b2d8-47f6-87b7-4c0b0786caaf","Type":"ContainerStarted","Data":"377ae12e892478f436ce2810531e4a61de942007ab19a1e71de7850634a1a834"} Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.022422 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" event={"ID":"5489e784-b2d8-47f6-87b7-4c0b0786caaf","Type":"ContainerStarted","Data":"5deaa53b44a5a544ba8c0271727e829a57592d5d5ecf3274b6d75f5cd4e82a87"} Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.025625 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57a4e2c7-255f-466f-a75d-3517b390ad06-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-948ch\" (UID: \"57a4e2c7-255f-466f-a75d-3517b390ad06\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.037558 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hk9wh" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.039129 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" event={"ID":"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b","Type":"ContainerStarted","Data":"eb6b15cc0ef1fae0839f15806eaf1e7e64f5c09cc6b007a866d42866aeb66b69"} Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.042107 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx4wh\" (UniqueName: \"kubernetes.io/projected/1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b-kube-api-access-sx4wh\") pod \"router-default-5444994796-h8dsm\" (UID: \"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b\") " pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.042843 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.052488 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" event={"ID":"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643","Type":"ContainerStarted","Data":"54670962edba0a6878956ae81d67fac7f8cdfec9d74d204a371256aec7f90c17"} Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.060208 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.060829 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.560791945 +0000 UTC m=+143.143263324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.066077 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.066745 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9vc\" (UniqueName: \"kubernetes.io/projected/7d1b1c88-f1c8-4795-9fed-f3424b1355fa-kube-api-access-wq9vc\") pod \"olm-operator-6b444d44fb-tpbjs\" (UID: \"7d1b1c88-f1c8-4795-9fed-f3424b1355fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.067814 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.073661 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.083088 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rqvqs\" (UID: \"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.100071 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495jl\" (UniqueName: \"kubernetes.io/projected/9027b945-8ba9-4e3c-a6ee-21271a3e30d1-kube-api-access-495jl\") pod \"service-ca-operator-777779d784-72rck\" (UID: \"9027b945-8ba9-4e3c-a6ee-21271a3e30d1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.112124 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.125523 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clcpt\" (UniqueName: \"kubernetes.io/projected/b2fe1c31-7dc8-4f55-b853-15de35052479-kube-api-access-clcpt\") pod \"package-server-manager-789f6589d5-pcnqw\" (UID: \"b2fe1c31-7dc8-4f55-b853-15de35052479\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.142924 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88gsn\" (UniqueName: \"kubernetes.io/projected/c02b858d-680d-415a-be28-5f382cdaaac1-kube-api-access-88gsn\") pod \"machine-config-operator-74547568cd-69txp\" (UID: \"c02b858d-680d-415a-be28-5f382cdaaac1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.163982 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.164421 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.664405298 +0000 UTC m=+143.246876677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.181223 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mlcwl"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.191280 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.199839 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.208639 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.247852 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.265370 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.265760 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.765744767 +0000 UTC m=+143.348216136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.286989 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ljwn7"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.290667 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.300784 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.310485 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klw64"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.315373 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.359512 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.359759 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.367196 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.367449 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.867438707 +0000 UTC m=+143.449910086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.370803 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4s28"] Nov 24 11:32:40 crc kubenswrapper[4789]: W1124 11:32:40.452102 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9a07607_7a0f_4436_a3bc_9bd2cbf61663.slice/crio-707e56f3e14c9e6be4c0a5f7c120587f7571d2c267d7f3012435986cb28c2707 WatchSource:0}: Error finding container 707e56f3e14c9e6be4c0a5f7c120587f7571d2c267d7f3012435986cb28c2707: Status 404 returned error can't find the container with id 707e56f3e14c9e6be4c0a5f7c120587f7571d2c267d7f3012435986cb28c2707 Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.468091 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.468628 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:40.968612052 +0000 UTC m=+143.551083431 (durationBeforeRetry 500ms). 
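[Annotation] Every deadline in these retry records is printed twice, once as wall-clock time and once as an m=+ offset (for example m=+143.041081571 in the first mount failure of this burst). The m=+ suffix is standard Go behavior: time.Time.String() appends the monotonic-clock reading, in seconds since the process acquired it, so these offsets count up from kubelet start. That makes each record self-describing: subtracting the offset from the wall-clock deadline recovers the process start time, and the deadline is simply the failure time plus the 500 ms delay. A small check of that arithmetic, using values copied from the first retry record:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Copied from the record: "No retries permitted until
        // 2025-11-24 11:32:40.458610192 +0000 UTC m=+143.041081571
        // (durationBeforeRetry 500ms)".
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        deadline, err := time.Parse(layout, "2025-11-24 11:32:40.458610192 +0000 UTC")
        if err != nil {
            panic(err)
        }
        monotonic := 143.041081571 // the m=+ suffix: seconds since kubelet start

        // Subtracting the monotonic offset recovers process start time.
        start := deadline.Add(-time.Duration(monotonic * float64(time.Second)))
        fmt.Println("kubelet started around:", start) // ≈ 2025-11-24 11:30:17.41 +0000 UTC

        // The deadline is the failure time plus the 500 ms retry delay.
        failedAt := deadline.Add(-500 * time.Millisecond)
        fmt.Println("operation failed at:", failedAt) // 11:32:39.958610192, matching the
        // E1124 11:32:39.958623 stamp on the record itself
    }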
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.499981 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml"] Nov 24 11:32:40 crc kubenswrapper[4789]: W1124 11:32:40.536608 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd90e94ec_ea22_4ba7_a0b0_7b636dcccf9c.slice/crio-62cb9fd2eddfffbe7d8ffc04b3bb640aa6cde49556f2406d96cc59c2931ba5a4 WatchSource:0}: Error finding container 62cb9fd2eddfffbe7d8ffc04b3bb640aa6cde49556f2406d96cc59c2931ba5a4: Status 404 returned error can't find the container with id 62cb9fd2eddfffbe7d8ffc04b3bb640aa6cde49556f2406d96cc59c2931ba5a4 Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.538410 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.570381 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.571229 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.071217086 +0000 UTC m=+143.653688465 (durationBeforeRetry 500ms). 
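[Annotation] The "SyncLoop UPDATE" source="api" records threaded through this window are the other input to the kubelet's main loop: while PLEG reports what the container runtime did, the API server pushes updated pod objects for each pod as it comes up, and the sync loop multiplexes the two streams. A toy version of that multiplexing, in spirit only; the channel and type names are invented for the sketch:

    package main

    import (
        "fmt"
        "time"
    )

    type update struct{ pods []string }
    type plegEvent struct{ pod, typ string }

    // syncLoop multiplexes API-server pod updates and PLEG events the way
    // the kubelet's SyncLoop does, reduced to a bare select.
    func syncLoop(updates <-chan update, events <-chan plegEvent, stop <-chan struct{}) {
        for {
            select {
            case u := <-updates:
                fmt.Printf("SyncLoop UPDATE source=\"api\" pods=%v\n", u.pods)
            case e := <-events:
                fmt.Printf("SyncLoop (PLEG): %s for pod %s\n", e.typ, e.pod)
            case <-stop:
                return
            }
        }
    }

    func main() {
        updates := make(chan update, 1)
        events := make(chan plegEvent, 1)
        stop := make(chan struct{})

        updates <- update{pods: []string{"openshift-etcd-operator/etcd-operator-b45778765-v7zss"}}
        events <- plegEvent{pod: "openshift-console/console-f9d7485db-ljwn7", typ: "ContainerStarted"}

        go func() { time.Sleep(100 * time.Millisecond); close(stop) }()
        syncLoop(updates, events, stop)
    }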
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.620213 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.671702 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.677331 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.177285857 +0000 UTC m=+143.759757236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.678607 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.679180 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.179167729 +0000 UTC m=+143.761639108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: W1124 11:32:40.777446 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f023c49_9ed6_4ed3_a6ce_560c3fcb3a58.slice/crio-5d907ec348379410cb0abab792b3f88576aef3b2af0a1395190b6bae2be842d0 WatchSource:0}: Error finding container 5d907ec348379410cb0abab792b3f88576aef3b2af0a1395190b6bae2be842d0: Status 404 returned error can't find the container with id 5d907ec348379410cb0abab792b3f88576aef3b2af0a1395190b6bae2be842d0 Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.779892 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.780172 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.280157709 +0000 UTC m=+143.862629088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.872296 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v7zss"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.882527 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.882839 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.382826775 +0000 UTC m=+143.965298154 (durationBeforeRetry 500ms). 
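[Annotation] The W-level "Failed to process watch event ... Status 404" lines are a different, benign race: cAdvisor sees a new crio-<id> cgroup appear under /kubepods.slice before the runtime can answer a lookup for that container ID, the lookup 404s, and the watch event is dropped. The same IDs show up moments later in ContainerStarted events (5d907ec348... does, for cluster-image-registry-operator below), so nothing is actually lost. A sketch of treating that error as transient; matching on the message text is purely illustrative, not the kubelet's logic:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // transient reports whether a watch-event error is the benign
    // "container not yet known to the runtime" race seen in the log.
    func transient(err error) bool {
        return err != nil && strings.Contains(err.Error(), "can't find the container with id")
    }

    func main() {
        err := errors.New("Status 404 returned error can't find the container with id 5d907ec348379410cb0abab792b3f88576aef3b2af0a1395190b6bae2be842d0")
        if transient(err) {
            fmt.Println("dropping watch event; the container is picked up on the next housekeeping pass")
        }
    }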
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.883104 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t2scc"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.927286 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5cgnl"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.935568 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.958240 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.975997 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb"] Nov 24 11:32:40 crc kubenswrapper[4789]: I1124 11:32:40.982945 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:40 crc kubenswrapper[4789]: E1124 11:32:40.983266 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.48325231 +0000 UTC m=+144.065723689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.059123 4789 generic.go:334] "Generic (PLEG): container finished" podID="17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643" containerID="5c2856d234be07aa62cd1be08b479e51e265c9a013a3d36b1fe103496e62be20" exitCode=0 Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.059339 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" event={"ID":"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643","Type":"ContainerDied","Data":"5c2856d234be07aa62cd1be08b479e51e265c9a013a3d36b1fe103496e62be20"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.086360 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.086658 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.586646947 +0000 UTC m=+144.169118326 (durationBeforeRetry 500ms). 
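[Annotation] The "Generic (PLEG): container finished ... exitCode=0" record just above, followed by a ContainerDied event for the same openshift-oauth-apiserver pod, is the normal shape of a container running to completion, consistent with an init container finishing before the main containers start. Each PLEG record carries the same small triple the log prints as event={...}: a pod ID, an event type, and a data payload holding the container or sandbox ID. A toy consumer of that shape; the types are illustrative, not the kubelet's:

    package main

    import "fmt"

    // podLifecycleEvent mirrors the {ID, Type, Data} triple printed in the
    // "SyncLoop (PLEG): event for pod" records.
    type podLifecycleEvent struct {
        ID   string // pod UID
        Type string // ContainerStarted, ContainerDied, ...
        Data string // container or sandbox ID
    }

    func handle(ev podLifecycleEvent) {
        switch ev.Type {
        case "ContainerStarted":
            fmt.Printf("pod %s: container %s started, re-sync pod\n", ev.ID, ev.Data)
        case "ContainerDied":
            fmt.Printf("pod %s: container %s died, inspect exit code\n", ev.ID, ev.Data)
        default:
            fmt.Printf("pod %s: ignoring %s\n", ev.ID, ev.Type)
        }
    }

    func main() {
        // The ContainerDied event logged for the oauth-apiserver pod above.
        handle(podLifecycleEvent{
            ID:   "17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643",
            Type: "ContainerDied",
            Data: "5c2856d234be07aa62cd1be08b479e51e265c9a013a3d36b1fe103496e62be20",
        })
    }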
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.089602 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ljwn7" event={"ID":"c9a07607-7a0f-4436-a3bc-9bd2cbf61663","Type":"ContainerStarted","Data":"707e56f3e14c9e6be4c0a5f7c120587f7571d2c267d7f3012435986cb28c2707"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.100999 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" event={"ID":"97dca1c4-6dff-48cd-8e41-c41d0c850fda","Type":"ContainerStarted","Data":"7211c5bc0895d6d666a5e36e727c7e97b7a67ff66de9badb65c4134f7da62448"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.107925 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mlcwl" event={"ID":"c20b0775-ba72-4379-b5df-2ff35ffc2704","Type":"ContainerStarted","Data":"21bda1ec446dcdac74d974c178d84237ce120ec7029206b8a62db93f37d61e91"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.108845 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" event={"ID":"9380ccce-963f-42e6-b182-65e9bbf9f47e","Type":"ContainerStarted","Data":"0cf8aefb2d03c1c163cb0e245c2edc8411cfa9f893c031c003639d41eb5a0cd8"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.109905 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vng2k" event={"ID":"88ed3262-9f36-4edf-ace6-4f739dcb8070","Type":"ContainerStarted","Data":"bd57b457c018d11b0890e8b2df51cc91b625dd7050a96b4c705232f928b82235"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.109925 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vng2k" event={"ID":"88ed3262-9f36-4edf-ace6-4f739dcb8070","Type":"ContainerStarted","Data":"da451f9afabe3f9a74563225c91c8e011bbaf7efd8e89582b435b62ac0aa08c1"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.120309 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" event={"ID":"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c","Type":"ContainerStarted","Data":"62cb9fd2eddfffbe7d8ffc04b3bb640aa6cde49556f2406d96cc59c2931ba5a4"} Nov 24 11:32:41 crc kubenswrapper[4789]: W1124 11:32:41.120391 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb92340_d666_48d7_8b9e_5f25c48b546f.slice/crio-f239f42a9ec626b85c380acdc12249cd8291ba3ab9858684ff85af9e68438b7f WatchSource:0}: Error finding container f239f42a9ec626b85c380acdc12249cd8291ba3ab9858684ff85af9e68438b7f: Status 404 returned error can't find the container with id f239f42a9ec626b85c380acdc12249cd8291ba3ab9858684ff85af9e68438b7f Nov 24 11:32:41 crc kubenswrapper[4789]: W1124 11:32:41.133981 4789 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc51acce1_f5f7_44d8_aadf_ae468cf2e29b.slice/crio-a85b515c55ab45f44811c7cfe1b6efd67039ef2e6f50f4766f508a42a71ead64 WatchSource:0}: Error finding container a85b515c55ab45f44811c7cfe1b6efd67039ef2e6f50f4766f508a42a71ead64: Status 404 returned error can't find the container with id a85b515c55ab45f44811c7cfe1b6efd67039ef2e6f50f4766f508a42a71ead64 Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.136921 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" event={"ID":"c9da2bc3-3945-4a02-8613-39338321441d","Type":"ContainerStarted","Data":"c2400d05bb15627360269a451a63ceb785d8c2dacbd4432ad57dc60487db99e8"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.153551 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" event={"ID":"04826e9c-2f6b-4215-b334-c52ee5f5e150","Type":"ContainerStarted","Data":"14342b7d6283564ae7a56844e03ff463eadd54f79e4f6a756927ac4dcf3333ca"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.163276 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-t2scc" event={"ID":"1318a733-4e15-40bc-a40c-da929809e25c","Type":"ContainerStarted","Data":"c5debbf934a6cf45d94f62d56fb894441f4906293c7c4fc5904cb74856b63705"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.174944 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" event={"ID":"026c0fd3-78be-48ef-81cd-ba63abb9197d","Type":"ContainerStarted","Data":"2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.175570 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.177312 4789 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-bp2hb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.177349 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" podUID="026c0fd3-78be-48ef-81cd-ba63abb9197d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.179664 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" event={"ID":"b2000def-4dbe-4976-a901-111027907fa5","Type":"ContainerStarted","Data":"690ffe4958106aa95e6543dc27be1ba7141e7b04fa672565cb98e1cb7eb66e97"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.182036 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h8dsm" event={"ID":"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b","Type":"ContainerStarted","Data":"99002dc78ea8c8afc7b28d1d8e986b7f77eb6791f70aec11939d6c30d3265d4d"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.182732 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" event={"ID":"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58","Type":"ContainerStarted","Data":"5d907ec348379410cb0abab792b3f88576aef3b2af0a1395190b6bae2be842d0"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.187857 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.188584 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.688569832 +0000 UTC m=+144.271041211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.207535 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hk9wh"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.209049 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.232132 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xt8qf"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.236793 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" event={"ID":"4372e46e-19ca-487e-b2ee-1fea92a3197d","Type":"ContainerStarted","Data":"1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.237421 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.238355 4789 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-j4swj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.238493 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" podUID="4372e46e-19ca-487e-b2ee-1fea92a3197d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.240535 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wkkmt"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.249584 4789 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" event={"ID":"dec7c435-8991-4348-b471-dfc3c15a0001","Type":"ContainerStarted","Data":"b8f9d404e04f713ac9f352eb51a394c0bdf0f8acaf84903d474a1c93d458c565"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.289740 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.294222 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.794195741 +0000 UTC m=+144.376667120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.296046 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" event={"ID":"0f4736c2-dfae-4e07-ab51-55978257a8bf","Type":"ContainerStarted","Data":"0b33614e4e77172af8703aed3268866efd1ed56d1f4621c76983ead7d5cae9da"} Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.310448 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.391365 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.393408 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.893392531 +0000 UTC m=+144.475863900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: W1124 11:32:41.443117 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod666ba159_709e_4b10_8d3d_6a7ae785f61f.slice/crio-3fae920b4b2d06dd2f619d259349234f9d9f9d165c6252e1f498b19e0b3d2ac7 WatchSource:0}: Error finding container 3fae920b4b2d06dd2f619d259349234f9d9f9d165c6252e1f498b19e0b3d2ac7: Status 404 returned error can't find the container with id 3fae920b4b2d06dd2f619d259349234f9d9f9d165c6252e1f498b19e0b3d2ac7 Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.450560 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wk4x"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.456234 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.491959 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" podStartSLOduration=117.491945264 podStartE2EDuration="1m57.491945264s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:41.491189513 +0000 UTC m=+144.073660892" watchObservedRunningTime="2025-11-24 11:32:41.491945264 +0000 UTC m=+144.074416643" Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.493218 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.493537 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:41.993523867 +0000 UTC m=+144.575995246 (durationBeforeRetry 500ms). 
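[Annotation] The pod_startup_latency_tracker record above condenses the whole restart story into one line: route-controller-manager-6576b87f9c-5lt8v was created at 11:30:44 but only observed running at 11:32:41, so podStartE2EDuration is 1m57.49s, and because the pull timestamps are zero values (0001-01-01 ...) — no image pull was observed for this pod — podStartSLOduration, which is meant to exclude pull time, comes out identical. The arithmetic, reproduced from the record's own timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-11-24 11:30:44 +0000 UTC")
        running, _ := time.Parse(layout, "2025-11-24 11:32:41.491945264 +0000 UTC")

        fmt.Println(running.Sub(created).Seconds()) // 117.491945264, the podStartSLOduration
        fmt.Println(running.Sub(created))           // 1m57.491945264s, the podStartE2EDuration
    }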
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.596385 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.596545 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.096521423 +0000 UTC m=+144.678992802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.597707 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.598096 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.098082556 +0000 UTC m=+144.680553935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: W1124 11:32:41.603280 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff878878_c8f6_420d_b564_a98660220eba.slice/crio-8eaa44142bbfc64b0a5189452d670d8c358e7b31ef1d0725b8dc209a5322fb27 WatchSource:0}: Error finding container 8eaa44142bbfc64b0a5189452d670d8c358e7b31ef1d0725b8dc209a5322fb27: Status 404 returned error can't find the container with id 8eaa44142bbfc64b0a5189452d670d8c358e7b31ef1d0725b8dc209a5322fb27 Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.669187 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.700630 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.700895 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.200881386 +0000 UTC m=+144.783352765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.745292 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.765017 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.792660 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.800343 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" podStartSLOduration=118.800328263 podStartE2EDuration="1m58.800328263s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:41.800056486 +0000 UTC m=+144.382527865" watchObservedRunningTime="2025-11-24 11:32:41.800328263 +0000 UTC m=+144.382799642" Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.813321 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.813689 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.313677682 +0000 UTC m=+144.896149051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: W1124 11:32:41.817336 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf039a2_0b1a_4284_9e4f_30178313bb09.slice/crio-0b0c0a3d8b36c51fe581e3135479077d007ef20fc6d06308bd5215e7b13fbe68 WatchSource:0}: Error finding container 0b0c0a3d8b36c51fe581e3135479077d007ef20fc6d06308bd5215e7b13fbe68: Status 404 returned error can't find the container with id 0b0c0a3d8b36c51fe581e3135479077d007ef20fc6d06308bd5215e7b13fbe68 Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.904696 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.906775 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.916832 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xf9qh"] Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.917136 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.917374 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.417357177 +0000 UTC m=+144.999828556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.917569 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:41 crc kubenswrapper[4789]: E1124 11:32:41.917987 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 11:32:42.417975443 +0000 UTC m=+145.000446822 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:41 crc kubenswrapper[4789]: I1124 11:32:41.937760 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-69txp"] Nov 24 11:32:41 crc kubenswrapper[4789]: W1124 11:32:41.983817 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2fe1c31_7dc8_4f55_b853_15de35052479.slice/crio-036200fa50b49128f5902e224a172e887c381b0265ec5ecbc8a36996a7716827 WatchSource:0}: Error finding container 036200fa50b49128f5902e224a172e887c381b0265ec5ecbc8a36996a7716827: Status 404 returned error can't find the container with id 036200fa50b49128f5902e224a172e887c381b0265ec5ecbc8a36996a7716827 Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.018731 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.019309 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.519269413 +0000 UTC m=+145.101740792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.058335 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-72rck"] Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.121002 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.121399 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.621386134 +0000 UTC m=+145.203857513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.160408 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" podStartSLOduration=119.16038104 podStartE2EDuration="1m59.16038104s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.15817384 +0000 UTC m=+144.740645219" watchObservedRunningTime="2025-11-24 11:32:42.16038104 +0000 UTC m=+144.742852419" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.222779 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.223170 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.723154705 +0000 UTC m=+145.305626084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.326249 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.326703 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.826693455 +0000 UTC m=+145.409164824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.336870 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" event={"ID":"2e152bba-2c0e-4f46-8bc9-279649243e6c","Type":"ContainerStarted","Data":"b1a9d6093a43e4af83ad19d1294919267a7e3a423908abd5f8e3ce0ba9020950"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.374442 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" event={"ID":"a6a654d4-4e05-4848-ab14-624f78b93cfa","Type":"ContainerStarted","Data":"544a41b7b98e8860cc5fa122241bf88440fbb839dc61d7b0e08ae8a517c6e9ef"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.387697 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" event={"ID":"57a4e2c7-255f-466f-a75d-3517b390ad06","Type":"ContainerStarted","Data":"432faf7778f99eb93ec4feefeae7900914a3eb6ddd1ccfb806e2eaee3c56f8bd"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.427409 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.427754 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:42.927739656 +0000 UTC m=+145.510211035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.431919 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" event={"ID":"48ee479a-ea6a-4831-858a-1cdfaca6762c","Type":"ContainerStarted","Data":"c39475e2940c01ab00639ad20049fd51c057485ed8a9347a00f7125681397428"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.433322 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ljwn7" event={"ID":"c9a07607-7a0f-4436-a3bc-9bd2cbf61663","Type":"ContainerStarted","Data":"67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.440292 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" event={"ID":"cbf039a2-0b1a-4284-9e4f-30178313bb09","Type":"ContainerStarted","Data":"0b0c0a3d8b36c51fe581e3135479077d007ef20fc6d06308bd5215e7b13fbe68"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.446094 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" event={"ID":"dec7c435-8991-4348-b471-dfc3c15a0001","Type":"ContainerStarted","Data":"a5d4df00f8f68c8a69dbb640cc344cd7a05505942908a458890fe690f3a86056"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.450445 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" event={"ID":"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9","Type":"ContainerStarted","Data":"27e1de5e7a0fed5ac0945b92b3e016845a7d3bf9ab4117ac1851b07333da92aa"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.452560 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h8dsm" event={"ID":"1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b","Type":"ContainerStarted","Data":"141fe4455b583c5fcaf348daa32c9502ac97edd9ce15797ab81a61b0b8775b95"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.466703 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xt8qf" event={"ID":"153e2e1a-8390-42f3-b959-d3607dfef848","Type":"ContainerStarted","Data":"dfe832522f9f7e749ee0a2e2f4532ffc3e7cc9a1090273b12bfa31eec06c58c7"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.469440 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" event={"ID":"04826e9c-2f6b-4215-b334-c52ee5f5e150","Type":"ContainerStarted","Data":"b9229ef645797208fa9f3522b6e8eb7996aa6252015da876d078b32044f6c415"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.485156 4789 generic.go:334] "Generic (PLEG): container finished" podID="22cf157e-ce67-43f4-bbaf-577720728887" containerID="1c9ce35bef3267d6c82ea08f30baa5892064450eddd34cbdbba8f17d98dd329b" exitCode=0 Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.485266 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" event={"ID":"22cf157e-ce67-43f4-bbaf-577720728887","Type":"ContainerDied","Data":"1c9ce35bef3267d6c82ea08f30baa5892064450eddd34cbdbba8f17d98dd329b"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.529994 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.540510 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" event={"ID":"07688099-4b3c-4fae-9eba-b3d7308cf8e6","Type":"ContainerStarted","Data":"332f6d82b725bf3e64c2ddda469880edb8942d441f55f2bc9bd13e31af3f884d"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.541371 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.541793 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7fjn" podStartSLOduration=118.541779468 podStartE2EDuration="1m58.541779468s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.540480042 +0000 UTC m=+145.122951421" watchObservedRunningTime="2025-11-24 11:32:42.541779468 +0000 UTC m=+145.124250847" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.542657 4789 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j4dj6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.542685 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" podUID="07688099-4b3c-4fae-9eba-b3d7308cf8e6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.542901 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ljwn7" podStartSLOduration=119.542895469 podStartE2EDuration="1m59.542895469s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.495279173 +0000 UTC m=+145.077750562" watchObservedRunningTime="2025-11-24 11:32:42.542895469 +0000 UTC m=+145.125366848" Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.543018 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.043002372 +0000 UTC m=+145.625473751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.544503 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" event={"ID":"b2fe1c31-7dc8-4f55-b853-15de35052479","Type":"ContainerStarted","Data":"036200fa50b49128f5902e224a172e887c381b0265ec5ecbc8a36996a7716827"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.547176 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" event={"ID":"b2000def-4dbe-4976-a901-111027907fa5","Type":"ContainerStarted","Data":"4ae3aa4e3d66922db8bcd6d5473e8842d5d372a24a7497cb474c0808ca1f587c"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.564409 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" event={"ID":"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c","Type":"ContainerStarted","Data":"34712c7ba366c4773ec640250899ebece2ef757c34f898214c75b0865aef6bc3"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.569511 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hk9wh" event={"ID":"666ba159-709e-4b10-8d3d-6a7ae785f61f","Type":"ContainerStarted","Data":"3fae920b4b2d06dd2f619d259349234f9d9f9d165c6252e1f498b19e0b3d2ac7"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.617316 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h8dsm" podStartSLOduration=119.617302254 podStartE2EDuration="1m59.617302254s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.615088993 +0000 UTC m=+145.197560372" watchObservedRunningTime="2025-11-24 11:32:42.617302254 +0000 UTC m=+145.199773633" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.625978 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" event={"ID":"9aeda001-70e0-4e29-b122-e75d98325c1d","Type":"ContainerStarted","Data":"960f45dd66882c8da1a16f210a40ec6fb80e196725442e489a15ef119e67a7a0"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.627998 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.633272 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.633438 4789 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.133416419 +0000 UTC m=+145.715887798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.633765 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.635660 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.135651801 +0000 UTC m=+145.718123180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.635662 4789 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-j6s5s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.635696 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" podUID="9aeda001-70e0-4e29-b122-e75d98325c1d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.671289 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28skr" podStartSLOduration=119.671274415 podStartE2EDuration="1m59.671274415s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.668817918 +0000 UTC m=+145.251289297" watchObservedRunningTime="2025-11-24 11:32:42.671274415 +0000 UTC m=+145.253745794" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.715737 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" event={"ID":"9235e424-26c2-4a58-8347-6eeabd8fc282","Type":"ContainerStarted","Data":"589cb187180677e744eb3f7728f330431b08d111cddc7c6de4a983a21888799e"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.735936 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.737911 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.237878365 +0000 UTC m=+145.820349744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.738281 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.738737 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" event={"ID":"5489e784-b2d8-47f6-87b7-4c0b0786caaf","Type":"ContainerStarted","Data":"df9602790d2870863beb46c803dd904086d8adc9605e9aa5f5d97350f5e83695"} Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.739424 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.239408957 +0000 UTC m=+145.821880336 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.749740 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" event={"ID":"4a1856d7-6ca5-475f-8476-b2325d595447","Type":"ContainerStarted","Data":"e971a0cd45f359d75465b44583f9649b2599a302ba9646c909b541dc70d1cd97"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.754946 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" event={"ID":"0f4736c2-dfae-4e07-ab51-55978257a8bf","Type":"ContainerStarted","Data":"23b3067e0949e555ee6a58098f3e51fd6747d79933dc4328bb8c4a5edcf8f4fb"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.764330 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" event={"ID":"c9da2bc3-3945-4a02-8613-39338321441d","Type":"ContainerStarted","Data":"794e98bf62f07649eddf6c0e727ee83cbec77825926905c40926ec189cd6d9c7"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.772182 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" event={"ID":"43b17f72-4406-4ea9-99b5-6683ee119e5a","Type":"ContainerStarted","Data":"00bd51bead07c8da4b6e05893d852b9649e8633fc9fcd4c75f97e56271367d69"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.773831 4789 generic.go:334] "Generic (PLEG): container finished" podID="bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b" containerID="d78cb7e605a53d3cc2ab8ec902f94cbe2edd88eaf6f59bf04acd15d76b8a88f8" exitCode=0 Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.773877 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" event={"ID":"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b","Type":"ContainerDied","Data":"d78cb7e605a53d3cc2ab8ec902f94cbe2edd88eaf6f59bf04acd15d76b8a88f8"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.781154 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" podStartSLOduration=118.781140421 podStartE2EDuration="1m58.781140421s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.780770751 +0000 UTC m=+145.363242130" watchObservedRunningTime="2025-11-24 11:32:42.781140421 +0000 UTC m=+145.363611800" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.781753 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" podStartSLOduration=118.781747577 podStartE2EDuration="1m58.781747577s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.727146699 +0000 UTC 
m=+145.309618078" watchObservedRunningTime="2025-11-24 11:32:42.781747577 +0000 UTC m=+145.364218956" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.782899 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" event={"ID":"97dca1c4-6dff-48cd-8e41-c41d0c850fda","Type":"ContainerStarted","Data":"8b385e7bb80cf171416a1f85e8964b55b4c99bd917cfd25c5985b8635efbf733"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.786198 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" event={"ID":"9380ccce-963f-42e6-b182-65e9bbf9f47e","Type":"ContainerStarted","Data":"e69bc749b60a9ad61ee354b3c27cc29d07305cb1bec95092f684d6654f5e61d8"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.792316 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mlcwl" event={"ID":"c20b0775-ba72-4379-b5df-2ff35ffc2704","Type":"ContainerStarted","Data":"9711a9658b13cf35e6122d458e96539c0656b3fd253ccf072fd2f3216f887bda"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.792881 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mlcwl" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.794729 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-mlcwl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.794864 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mlcwl" podUID="c20b0775-ba72-4379-b5df-2ff35ffc2704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.795852 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" event={"ID":"ff878878-c8f6-420d-b564-a98660220eba","Type":"ContainerStarted","Data":"8eaa44142bbfc64b0a5189452d670d8c358e7b31ef1d0725b8dc209a5322fb27"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.797994 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" event={"ID":"9027b945-8ba9-4e3c-a6ee-21271a3e30d1","Type":"ContainerStarted","Data":"2af5490e1419f77aae1909d11f3c2ce2577757f1ca3054d8badb8cce47df8b07"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.798168 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7ndg" podStartSLOduration=119.798159241 podStartE2EDuration="1m59.798159241s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.794721646 +0000 UTC m=+145.377193025" watchObservedRunningTime="2025-11-24 11:32:42.798159241 +0000 UTC m=+145.380630610" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.814243 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" 
event={"ID":"c51acce1-f5f7-44d8-aadf-ae468cf2e29b","Type":"ContainerStarted","Data":"4623592cea64378ecbebfdd646e0ed0cedeb82b45bc21203235fef69e62288f2"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.814289 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" event={"ID":"c51acce1-f5f7-44d8-aadf-ae468cf2e29b","Type":"ContainerStarted","Data":"a85b515c55ab45f44811c7cfe1b6efd67039ef2e6f50f4766f508a42a71ead64"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.821586 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" event={"ID":"7d1b1c88-f1c8-4795-9fed-f3424b1355fa","Type":"ContainerStarted","Data":"224721fdf7dbb0d49607ea3dda24ac4b6e082f0823ab40d0b7c386dad67fcf1a"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.824607 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" event={"ID":"c02b858d-680d-415a-be28-5f382cdaaac1","Type":"ContainerStarted","Data":"f8ab1d4ad15c3812e5b43d7c978545065ddec68bcf94f866cbf4546c053053eb"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.840553 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.841811 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.341789896 +0000 UTC m=+145.924261375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.842079 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" event={"ID":"2cb92340-d666-48d7-8b9e-5f25c48b546f","Type":"ContainerStarted","Data":"f239f42a9ec626b85c380acdc12249cd8291ba3ab9858684ff85af9e68438b7f"} Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.863903 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.935866 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lk2l" podStartSLOduration=119.935849794 podStartE2EDuration="1m59.935849794s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.875533598 +0000 UTC m=+145.458004977" watchObservedRunningTime="2025-11-24 11:32:42.935849794 +0000 UTC m=+145.518321163" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.936276 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" podStartSLOduration=118.936270507 podStartE2EDuration="1m58.936270507s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:42.933970133 +0000 UTC m=+145.516441502" watchObservedRunningTime="2025-11-24 11:32:42.936270507 +0000 UTC m=+145.518741886" Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.942577 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:42 crc kubenswrapper[4789]: E1124 11:32:42.948177 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.448162505 +0000 UTC m=+146.030633884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:42 crc kubenswrapper[4789]: I1124 11:32:42.959228 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.035930 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rc4ml" podStartSLOduration=120.035914859 podStartE2EDuration="2m0.035914859s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:43.035310502 +0000 UTC m=+145.617781881" watchObservedRunningTime="2025-11-24 11:32:43.035914859 +0000 UTC m=+145.618386238" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.051015 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.051414 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.551396497 +0000 UTC m=+146.133867876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.068843 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.084172 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:43 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:43 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:43 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.084223 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.102435 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" podStartSLOduration=120.102422196 podStartE2EDuration="2m0.102422196s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:43.101910262 +0000 UTC m=+145.684381631" watchObservedRunningTime="2025-11-24 11:32:43.102422196 +0000 UTC m=+145.684893565" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.156871 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.157384 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.657373455 +0000 UTC m=+146.239844834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.173853 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kssj7" podStartSLOduration=120.173832939 podStartE2EDuration="2m0.173832939s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:43.155318768 +0000 UTC m=+145.737790147" watchObservedRunningTime="2025-11-24 11:32:43.173832939 +0000 UTC m=+145.756304318" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.181858 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vng2k" podStartSLOduration=6.181842931 podStartE2EDuration="6.181842931s" podCreationTimestamp="2025-11-24 11:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:43.181275105 +0000 UTC m=+145.763746484" watchObservedRunningTime="2025-11-24 11:32:43.181842931 +0000 UTC m=+145.764314310" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.236644 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mlcwl" podStartSLOduration=120.236630654 podStartE2EDuration="2m0.236630654s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:43.235229115 +0000 UTC m=+145.817700494" watchObservedRunningTime="2025-11-24 11:32:43.236630654 +0000 UTC m=+145.819102033" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.262956 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.263302 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.763288301 +0000 UTC m=+146.345759680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.368916 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.369310 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.869294799 +0000 UTC m=+146.451766178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.469866 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.470202 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:43.970187757 +0000 UTC m=+146.552659136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.573244 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.573605 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.073592994 +0000 UTC m=+146.656064373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.673992 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.674351 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.174330967 +0000 UTC m=+146.756802346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.776507 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.777099 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.277080595 +0000 UTC m=+146.859551984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.889212 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.889651 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.389633235 +0000 UTC m=+146.972104614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.893717 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" event={"ID":"07688099-4b3c-4fae-9eba-b3d7308cf8e6","Type":"ContainerStarted","Data":"acfcc6ced297a28018b9edfe81296359b9c0a91984c85491ebdeff14e66215e0"} Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.894265 4789 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j4dj6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.894306 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" podUID="07688099-4b3c-4fae-9eba-b3d7308cf8e6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.960953 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" event={"ID":"cbf039a2-0b1a-4284-9e4f-30178313bb09","Type":"ContainerStarted","Data":"1df8cf6338ef3619a6aa7e7073c9272a52b5b8f3a4705d20713c208e532cd4af"} Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.988820 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" event={"ID":"4a1856d7-6ca5-475f-8476-b2325d595447","Type":"ContainerStarted","Data":"df208b86a70eb17af7fcc8b865861590626999d05f972565b0509bab41bfb5bb"} Nov 24 11:32:43 crc kubenswrapper[4789]: I1124 11:32:43.990610 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:43 crc kubenswrapper[4789]: E1124 11:32:43.991353 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.491340665 +0000 UTC m=+147.073812044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.043128 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" event={"ID":"a6a654d4-4e05-4848-ab14-624f78b93cfa","Type":"ContainerStarted","Data":"45e1a44fdc87332ca587995d7354bbc5a100cd96147c5c41dd0199da54f236a9"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.059332 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9wk4x" event={"ID":"ff878878-c8f6-420d-b564-a98660220eba","Type":"ContainerStarted","Data":"55aa68bb828a0b716eb45b998a8ee4a4e8996d14861bce14d32bff8775153104"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.061360 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-v7zss" podStartSLOduration=121.061351299 podStartE2EDuration="2m1.061351299s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.060399013 +0000 UTC m=+146.642870392" watchObservedRunningTime="2025-11-24 11:32:44.061351299 +0000 UTC m=+146.643822678" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.079496 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:44 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:44 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:44 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.079541 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.090110 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" event={"ID":"57a4e2c7-255f-466f-a75d-3517b390ad06","Type":"ContainerStarted","Data":"82f4437f2ca39ec9bed3bb3e965cc68792a445745e08c5a09c78b62a04295163"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.094925 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.095314 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.595287026 +0000 UTC m=+147.177758405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.134536 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" event={"ID":"9027b945-8ba9-4e3c-a6ee-21271a3e30d1","Type":"ContainerStarted","Data":"e386f27ba0acabbf6a69a99047cde988f5163101bf53fff6e75c5606adf32e08"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.148664 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" podStartSLOduration=121.148652681 podStartE2EDuration="2m1.148652681s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.146924513 +0000 UTC m=+146.729395892" watchObservedRunningTime="2025-11-24 11:32:44.148652681 +0000 UTC m=+146.731124060" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.148950 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fxzq9" podStartSLOduration=120.148947119 podStartE2EDuration="2m0.148947119s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.118775445 +0000 UTC m=+146.701246824" watchObservedRunningTime="2025-11-24 11:32:44.148947119 +0000 UTC m=+146.731418498" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.149755 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" event={"ID":"48ee479a-ea6a-4831-858a-1cdfaca6762c","Type":"ContainerStarted","Data":"79249acf6e5e50a690e93bb69241f6f7c3d7b4100da7a95869d349a43603f727"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.150494 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.152291 4789 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xf9qh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.152321 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" podUID="48ee479a-ea6a-4831-858a-1cdfaca6762c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.195801 4789 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" event={"ID":"c02b858d-680d-415a-be28-5f382cdaaac1","Type":"ContainerStarted","Data":"6bff3753d97b992495937107387c9c32318c2cf1eaa615ebd4096deb62babbb8"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.195838 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" event={"ID":"2cb92340-d666-48d7-8b9e-5f25c48b546f","Type":"ContainerStarted","Data":"518f8323cbd0d265f5634d6ce67ed5207d029e258d21070d20fc8d744b3f5804"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.196416 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.198547 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.698533339 +0000 UTC m=+147.281004828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.209210 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-948ch" podStartSLOduration=121.209195964 podStartE2EDuration="2m1.209195964s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.208933237 +0000 UTC m=+146.791404616" watchObservedRunningTime="2025-11-24 11:32:44.209195964 +0000 UTC m=+146.791667343" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.214625 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" event={"ID":"0f4736c2-dfae-4e07-ab51-55978257a8bf","Type":"ContainerStarted","Data":"9ead2ddfb3890f6474685ad5486eb8429582c1b3042b47e55e396af362873c95"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.278783 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" event={"ID":"17eb56ae-d65c-4d0e-a7d5-b2f46c9d5643","Type":"ContainerStarted","Data":"b430d6610dbf36c82fdd196230d5fbd62ea851c4e20f8dab56b7437f27d05b60"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.292238 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" event={"ID":"43b17f72-4406-4ea9-99b5-6683ee119e5a","Type":"ContainerStarted","Data":"3085ece9f7b2aa819d47714297c7401c979599560861bac83b499e1ea4e0e05a"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 
11:32:44.303207 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" event={"ID":"9235e424-26c2-4a58-8347-6eeabd8fc282","Type":"ContainerStarted","Data":"6c336db3e471e15e7c1efba9787cea618baf627b325cef78f06a7edfaf107fa8"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.303972 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.304065 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.804044994 +0000 UTC m=+147.386516373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.304300 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.305426 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.805410911 +0000 UTC m=+147.387882380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.309035 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-72rck" podStartSLOduration=120.309020561 podStartE2EDuration="2m0.309020561s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.254740582 +0000 UTC m=+146.837211951" watchObservedRunningTime="2025-11-24 11:32:44.309020561 +0000 UTC m=+146.891491940" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.317946 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hk9wh" event={"ID":"666ba159-709e-4b10-8d3d-6a7ae785f61f","Type":"ContainerStarted","Data":"4826378e09d76fc73dda56c9f3c6812f1fe8b3287f0a2d86034523f9bf8422af"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.323127 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xt8qf" event={"ID":"153e2e1a-8390-42f3-b959-d3607dfef848","Type":"ContainerStarted","Data":"7328f0dc32b0252acaea65c64d0d661d24d6e9f8f8d72b06b52b8e4462a47d69"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.323680 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.324814 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-t2scc" event={"ID":"1318a733-4e15-40bc-a40c-da929809e25c","Type":"ContainerStarted","Data":"b10d307a2cdefed0a8abc49a1120d34ed64664ecd6ba2eadde6169036ba2dc3a"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.325395 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.326785 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" event={"ID":"9aeda001-70e0-4e29-b122-e75d98325c1d","Type":"ContainerStarted","Data":"27fbcf7f73a2b7d17b67265ad187341441991820d82a069ba2ca03d9d64eacbf"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.335874 4789 patch_prober.go:28] interesting pod/console-operator-58897d9998-t2scc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.335916 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-t2scc" podUID="1318a733-4e15-40bc-a40c-da929809e25c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.342195 4789 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6s5s" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.344206 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" event={"ID":"b2fe1c31-7dc8-4f55-b853-15de35052479","Type":"ContainerStarted","Data":"704e87d974266ea26a3217a16ede443e29f878cca62242a79696c30e7ed86b7c"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.352261 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" event={"ID":"7f023c49-9ed6-4ed3-a6ce-560c3fcb3a58","Type":"ContainerStarted","Data":"be348b8ed797fa55fe909f67b57606b8e39fc05951f81e155a48fdd348ffa5c2"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.361373 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" event={"ID":"d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c","Type":"ContainerStarted","Data":"c459c04e9a589e67f722986f7925b2af7c64f0ff0c93f9bc2e1b6a6e9c8175ba"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.388198 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" event={"ID":"7d1b1c88-f1c8-4795-9fed-f3424b1355fa","Type":"ContainerStarted","Data":"eab7b6b6f03db5b546f46b7fa6676654bf673f9ba2b2a7f7bb0bdc6df9d1bb58"} Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.389052 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-mlcwl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.389090 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mlcwl" podUID="c20b0775-ba72-4379-b5df-2ff35ffc2704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.389615 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.390611 4789 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tpbjs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.390664 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" podUID="7d1b1c88-f1c8-4795-9fed-f3424b1355fa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.406084 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.406855 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:44.906841374 +0000 UTC m=+147.489312743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.418740 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-svr79" podStartSLOduration=121.418711281 podStartE2EDuration="2m1.418711281s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.311287184 +0000 UTC m=+146.893758563" watchObservedRunningTime="2025-11-24 11:32:44.418711281 +0000 UTC m=+147.001182660" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.419637 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" podStartSLOduration=121.419631917 podStartE2EDuration="2m1.419631917s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.409931319 +0000 UTC m=+146.992402698" watchObservedRunningTime="2025-11-24 11:32:44.419631917 +0000 UTC m=+147.002103296" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.440189 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" podStartSLOduration=120.440174185 podStartE2EDuration="2m0.440174185s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.434190129 +0000 UTC m=+147.016661508" watchObservedRunningTime="2025-11-24 11:32:44.440174185 +0000 UTC m=+147.022645564" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.507695 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.528699 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.02866984 +0000 UTC m=+147.611141219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.535645 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" podStartSLOduration=120.535627282 podStartE2EDuration="2m0.535627282s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.533425031 +0000 UTC m=+147.115896410" watchObservedRunningTime="2025-11-24 11:32:44.535627282 +0000 UTC m=+147.118098661" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.608117 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hk9wh" podStartSLOduration=7.608099814 podStartE2EDuration="7.608099814s" podCreationTimestamp="2025-11-24 11:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.605883453 +0000 UTC m=+147.188354832" watchObservedRunningTime="2025-11-24 11:32:44.608099814 +0000 UTC m=+147.190571193" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.608498 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.608864 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.108851625 +0000 UTC m=+147.691323004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.703947 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" podStartSLOduration=120.703927651 podStartE2EDuration="2m0.703927651s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.70278719 +0000 UTC m=+147.285258569" watchObservedRunningTime="2025-11-24 11:32:44.703927651 +0000 UTC m=+147.286399030" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.704129 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-klw64" podStartSLOduration=120.704123787 podStartE2EDuration="2m0.704123787s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.651646567 +0000 UTC m=+147.234117946" watchObservedRunningTime="2025-11-24 11:32:44.704123787 +0000 UTC m=+147.286595166" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.712219 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.712631 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.212614601 +0000 UTC m=+147.795085980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.743490 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmvs5" podStartSLOduration=121.743458543 podStartE2EDuration="2m1.743458543s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.737442517 +0000 UTC m=+147.319913886" watchObservedRunningTime="2025-11-24 11:32:44.743458543 +0000 UTC m=+147.325929922" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.805042 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-t2scc" podStartSLOduration=121.805025524 podStartE2EDuration="2m1.805025524s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.804755197 +0000 UTC m=+147.387226566" watchObservedRunningTime="2025-11-24 11:32:44.805025524 +0000 UTC m=+147.387496903" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.806347 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" podStartSLOduration=120.806341911 podStartE2EDuration="2m0.806341911s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.775809277 +0000 UTC m=+147.358280656" watchObservedRunningTime="2025-11-24 11:32:44.806341911 +0000 UTC m=+147.388813290" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.812792 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.813067 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.313053116 +0000 UTC m=+147.895524495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.884578 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" podStartSLOduration=121.884562582 podStartE2EDuration="2m1.884562582s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:44.864733074 +0000 UTC m=+147.447204453" watchObservedRunningTime="2025-11-24 11:32:44.884562582 +0000 UTC m=+147.467033951" Nov 24 11:32:44 crc kubenswrapper[4789]: I1124 11:32:44.914142 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:44 crc kubenswrapper[4789]: E1124 11:32:44.914683 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.414664454 +0000 UTC m=+147.997135873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.015626 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.015819 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.515792778 +0000 UTC m=+148.098264157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.015911 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.016182 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.516175188 +0000 UTC m=+148.098646567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.016293 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xt8qf" podStartSLOduration=8.016275911 podStartE2EDuration="8.016275911s" podCreationTimestamp="2025-11-24 11:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:45.00718178 +0000 UTC m=+147.589653159" watchObservedRunningTime="2025-11-24 11:32:45.016275911 +0000 UTC m=+147.598747290" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.071045 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:45 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:45 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:45 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.071105 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.116989 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 
11:32:45.117390 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.617375914 +0000 UTC m=+148.199847293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.218678 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.218947 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.71893649 +0000 UTC m=+148.301407869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.320293 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.320386 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.820371262 +0000 UTC m=+148.402842641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.320622 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.320897 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.820890117 +0000 UTC m=+148.403361496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.394204 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rqvqs" event={"ID":"98d60ae9-773d-4bb7-8dd6-5de5b42bbcc9","Type":"ContainerStarted","Data":"1c2858d417840db4ad6d728f05d26e927565c5955bb4dcebf2a0843886480c09"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.396075 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hqkkq" event={"ID":"9235e424-26c2-4a58-8347-6eeabd8fc282","Type":"ContainerStarted","Data":"682676d5de259a68e488f8033aaed6f7a9f3971e822a57c576f6703002573e19"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.397161 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" event={"ID":"2e152bba-2c0e-4f46-8bc9-279649243e6c","Type":"ContainerStarted","Data":"48df2d1fa9670019d50d158ef62d607bf86435ff1883562fba2be333f8810d98"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.398595 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" event={"ID":"b2fe1c31-7dc8-4f55-b853-15de35052479","Type":"ContainerStarted","Data":"f860484a3ae9f8b0edd53e4f72439a909c690b7e66ea2e8665b20e184cca2054"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.399081 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.401029 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" 
event={"ID":"22cf157e-ce67-43f4-bbaf-577720728887","Type":"ContainerStarted","Data":"bfbef0afe4b2279eaf2b55e2f89b906a6f29235471b0f444d7e684e3e794a7e5"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.401050 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" event={"ID":"22cf157e-ce67-43f4-bbaf-577720728887","Type":"ContainerStarted","Data":"ec1042a8f14992d1f26fdfb4f7eee5821a9862817e3a0f5890b198ec8ec5d736"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.417237 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4s28" event={"ID":"04826e9c-2f6b-4215-b334-c52ee5f5e150","Type":"ContainerStarted","Data":"69142fb65e804589a19059f142ca8628ad8e0926656754b333ac8a3ab2999067"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.422266 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.422489 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.922439732 +0000 UTC m=+148.504911101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.422557 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.422881 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:45.922874394 +0000 UTC m=+148.505345763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.422980 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xt8qf" event={"ID":"153e2e1a-8390-42f3-b959-d3607dfef848","Type":"ContainerStarted","Data":"83239db4907725d8e66f646359d1f2011429a9364479e49de78315e7b04ae8a3"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.424607 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" event={"ID":"2cb92340-d666-48d7-8b9e-5f25c48b546f","Type":"ContainerStarted","Data":"876765f27049e1902cb84a19685cfb6cbef686527728e5e87bcfb5b42d4708c6"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.428339 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" event={"ID":"cbf039a2-0b1a-4284-9e4f-30178313bb09","Type":"ContainerStarted","Data":"c5094cf9068884e2d39601392726daf7278ee23b4fae88623a974671444ff7d8"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.430758 4789 generic.go:334] "Generic (PLEG): container finished" podID="c51acce1-f5f7-44d8-aadf-ae468cf2e29b" containerID="4623592cea64378ecbebfdd646e0ed0cedeb82b45bc21203235fef69e62288f2" exitCode=0 Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.430812 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" event={"ID":"c51acce1-f5f7-44d8-aadf-ae468cf2e29b","Type":"ContainerDied","Data":"4623592cea64378ecbebfdd646e0ed0cedeb82b45bc21203235fef69e62288f2"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.433155 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-69txp" event={"ID":"c02b858d-680d-415a-be28-5f382cdaaac1","Type":"ContainerStarted","Data":"81f8ae76d70b7cfa53ee374cc2cad30d8b9aaad28868d8090e09bfc97abed512"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.436106 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" event={"ID":"43b17f72-4406-4ea9-99b5-6683ee119e5a","Type":"ContainerStarted","Data":"6be72e801b4fe2008b198292d9e03f37697c87e6fa263824cb91e396f282a84c"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.442707 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" event={"ID":"bb760fa5-0dd1-4298-87de-d2cb1a0d3e0b","Type":"ContainerStarted","Data":"a7008220426d6b65147283cf01cae0f22b89fc6ca1cc4ff0d792045805298ec9"} Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.442741 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.448585 4789 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xf9qh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.448634 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" podUID="48ee479a-ea6a-4831-858a-1cdfaca6762c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.449678 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-mlcwl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.449740 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mlcwl" podUID="c20b0775-ba72-4379-b5df-2ff35ffc2704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.475173 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tpbjs" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.523254 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.523395 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.02337672 +0000 UTC m=+148.605848099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.525772 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.528202 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.028185974 +0000 UTC m=+148.610657453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.634329 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.634536 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.134511621 +0000 UTC m=+148.716983000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.634629 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.635009 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.134991494 +0000 UTC m=+148.717462873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.664343 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" podStartSLOduration=121.664319584 podStartE2EDuration="2m1.664319584s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:45.500809287 +0000 UTC m=+148.083280666" watchObservedRunningTime="2025-11-24 11:32:45.664319584 +0000 UTC m=+148.246790963" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.670322 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cgnl" podStartSLOduration=121.67030886 podStartE2EDuration="2m1.67030886s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:45.654639637 +0000 UTC m=+148.237111016" watchObservedRunningTime="2025-11-24 11:32:45.67030886 +0000 UTC m=+148.252780239" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.724478 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j4dj6" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.736052 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.736575 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.236554841 +0000 UTC m=+148.819026220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.814983 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg" podStartSLOduration=122.814968277 podStartE2EDuration="2m2.814968277s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:45.813864656 +0000 UTC m=+148.396336035" watchObservedRunningTime="2025-11-24 11:32:45.814968277 +0000 UTC m=+148.397439656" Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.838142 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.838413 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.338401863 +0000 UTC m=+148.920873242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.939497 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.939700 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.439673332 +0000 UTC m=+149.022144711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:45 crc kubenswrapper[4789]: I1124 11:32:45.939896 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:45 crc kubenswrapper[4789]: E1124 11:32:45.940267 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.440258868 +0000 UTC m=+149.022730247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.040837 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.041200 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.541184836 +0000 UTC m=+149.123656215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.067281 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mjfmp" podStartSLOduration=122.067262057 podStartE2EDuration="2m2.067262057s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:45.952678331 +0000 UTC m=+148.535149710" watchObservedRunningTime="2025-11-24 11:32:46.067262057 +0000 UTC m=+148.649733436" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.077769 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:46 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:46 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:46 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.077831 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.142800 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.142849 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.142901 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.142952 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.142991 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.145139 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.145450 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.645435857 +0000 UTC m=+149.227907236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.162420 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-t2scc" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.163596 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.163990 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.167211 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.195108 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" podStartSLOduration=123.195091978 podStartE2EDuration="2m3.195091978s" 
podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:46.068889162 +0000 UTC m=+148.651360541" watchObservedRunningTime="2025-11-24 11:32:46.195091978 +0000 UTC m=+148.777563357" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.199090 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.238736 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.245892 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.246415 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.246715 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.746702024 +0000 UTC m=+149.329173403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.252092 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g7l4l" podStartSLOduration=122.252075753 podStartE2EDuration="2m2.252075753s" podCreationTimestamp="2025-11-24 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:46.195744767 +0000 UTC m=+148.778216146" watchObservedRunningTime="2025-11-24 11:32:46.252075753 +0000 UTC m=+148.834547132" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.261738 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-plzxk"] Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.262725 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.276020 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.278986 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plzxk"] Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.353938 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-catalog-content\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.354016 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tcsq\" (UniqueName: \"kubernetes.io/projected/33ef3ee1-1338-4ca5-b290-ea83723c547e-kube-api-access-8tcsq\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.354076 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.354098 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-utilities\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.354459 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.854440411 +0000 UTC m=+149.436911800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.455383 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.455536 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-catalog-content\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.455573 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.955545564 +0000 UTC m=+149.538016963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.455608 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tcsq\" (UniqueName: \"kubernetes.io/projected/33ef3ee1-1338-4ca5-b290-ea83723c547e-kube-api-access-8tcsq\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.455673 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.455693 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-utilities\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.455951 4789 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:46.955940524 +0000 UTC m=+149.538411903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.456006 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-catalog-content\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.456178 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-utilities\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.460488 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" event={"ID":"2e152bba-2c0e-4f46-8bc9-279649243e6c","Type":"ContainerStarted","Data":"f62076d8c8142de8d47fb8f52f287d513970c78ddbe3819f1c6727317109c58a"} Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.465074 4789 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xf9qh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.465106 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" podUID="48ee479a-ea6a-4831-858a-1cdfaca6762c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.468233 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6tlsz"] Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.469111 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.502896 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.522146 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tcsq\" (UniqueName: \"kubernetes.io/projected/33ef3ee1-1338-4ca5-b290-ea83723c547e-kube-api-access-8tcsq\") pod \"community-operators-plzxk\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.553739 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6tlsz"] Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.556701 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.556980 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vcjw\" (UniqueName: \"kubernetes.io/projected/de46ba5d-4892-4797-bec0-edb2aadce87f-kube-api-access-9vcjw\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.557094 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-catalog-content\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.557392 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-utilities\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.558150 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.058130948 +0000 UTC m=+149.640602317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.601749 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.658389 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-utilities\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.659209 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-utilities\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.659236 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vcjw\" (UniqueName: \"kubernetes.io/projected/de46ba5d-4892-4797-bec0-edb2aadce87f-kube-api-access-9vcjw\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.659266 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-catalog-content\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.659310 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.659542 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.15953273 +0000 UTC m=+149.742004109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.659757 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-catalog-content\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.754751 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4z9g4"] Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.756286 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.760069 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.760601 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.260581041 +0000 UTC m=+149.843052420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.789661 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vcjw\" (UniqueName: \"kubernetes.io/projected/de46ba5d-4892-4797-bec0-edb2aadce87f-kube-api-access-9vcjw\") pod \"certified-operators-6tlsz\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.793723 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.803872 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z9g4"] Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.864514 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvh4t\" (UniqueName: \"kubernetes.io/projected/f176cbf2-3781-402f-a415-7f4d25eea239-kube-api-access-qvh4t\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.864551 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.864594 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-catalog-content\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.864627 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-utilities\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.864958 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.364912464 +0000 UTC m=+149.947383843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.919018 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dsbrt"] Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.920005 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.966578 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.966854 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvh4t\" (UniqueName: \"kubernetes.io/projected/f176cbf2-3781-402f-a415-7f4d25eea239-kube-api-access-qvh4t\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.966911 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-catalog-content\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.966946 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-utilities\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.967825 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-utilities\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:46 crc kubenswrapper[4789]: E1124 11:32:46.967920 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.467900249 +0000 UTC m=+150.050371628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:46 crc kubenswrapper[4789]: I1124 11:32:46.968383 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-catalog-content\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.025504 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsbrt"] Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.070725 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvh4t\" (UniqueName: \"kubernetes.io/projected/f176cbf2-3781-402f-a415-7f4d25eea239-kube-api-access-qvh4t\") pod \"community-operators-4z9g4\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.070880 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-utilities\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.070946 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpn5\" (UniqueName: \"kubernetes.io/projected/d203f144-c8d5-46fb-8139-3af59a00c0c9-kube-api-access-wmpn5\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.070965 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-catalog-content\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.070996 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.071285 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.571274015 +0000 UTC m=+150.153745394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.073220 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:47 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:47 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:47 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.073250 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.091804 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.173255 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.173434 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmpn5\" (UniqueName: \"kubernetes.io/projected/d203f144-c8d5-46fb-8139-3af59a00c0c9-kube-api-access-wmpn5\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.173461 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-catalog-content\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.173566 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-utilities\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.174002 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.673985062 +0000 UTC m=+150.256456441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.174018 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-utilities\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.174432 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-catalog-content\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.223929 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmpn5\" (UniqueName: \"kubernetes.io/projected/d203f144-c8d5-46fb-8139-3af59a00c0c9-kube-api-access-wmpn5\") pod \"certified-operators-dsbrt\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.273775 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.274739 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.275008 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.774997303 +0000 UTC m=+150.357468682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.375689 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.376042 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.876012824 +0000 UTC m=+150.458484203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.376174 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.376441 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.876429095 +0000 UTC m=+150.458900474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.476990 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.477354 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:47.977336463 +0000 UTC m=+150.559807842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.489544 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" event={"ID":"2e152bba-2c0e-4f46-8bc9-279649243e6c","Type":"ContainerStarted","Data":"c69da056ea79fb82184a68f32dc2d7efc6cbbe119c9251aa242f6b6842af4460"} Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.582219 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.584301 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.084284607 +0000 UTC m=+150.666755986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.683339 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.683548 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.183525019 +0000 UTC m=+150.765996398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.683608 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.683872 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.183861218 +0000 UTC m=+150.766332597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.784407 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.784595 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.284566111 +0000 UTC m=+150.867037490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.784985 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.785371 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.285332822 +0000 UTC m=+150.867804201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.811917 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.812562 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.817926 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.818119 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.855330 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.887134 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlpjj\" (UniqueName: \"kubernetes.io/projected/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-kube-api-access-jlpjj\") pod \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") "
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.887208 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-config-volume\") pod \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") "
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.887295 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-secret-volume\") pod \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\" (UID: \"c51acce1-f5f7-44d8-aadf-ae468cf2e29b\") "
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.887385 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.887518 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4149d0c4-d229-42bf-a53b-e1800c70946a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4149d0c4-d229-42bf-a53b-e1800c70946a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.887564 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4149d0c4-d229-42bf-a53b-e1800c70946a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4149d0c4-d229-42bf-a53b-e1800c70946a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.895264 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-kube-api-access-jlpjj" (OuterVolumeSpecName: "kube-api-access-jlpjj") pod "c51acce1-f5f7-44d8-aadf-ae468cf2e29b" (UID: "c51acce1-f5f7-44d8-aadf-ae468cf2e29b"). InnerVolumeSpecName "kube-api-access-jlpjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.895624 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c51acce1-f5f7-44d8-aadf-ae468cf2e29b" (UID: "c51acce1-f5f7-44d8-aadf-ae468cf2e29b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.895895 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.395869866 +0000 UTC m=+150.978341245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.898080 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c51acce1-f5f7-44d8-aadf-ae468cf2e29b" (UID: "c51acce1-f5f7-44d8-aadf-ae468cf2e29b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.900014 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.991112 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4149d0c4-d229-42bf-a53b-e1800c70946a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4149d0c4-d229-42bf-a53b-e1800c70946a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.991374 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.991405 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4149d0c4-d229-42bf-a53b-e1800c70946a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4149d0c4-d229-42bf-a53b-e1800c70946a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.991449 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.991476 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlpjj\" (UniqueName: \"kubernetes.io/projected/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-kube-api-access-jlpjj\") on node \"crc\" DevicePath \"\""
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.991487 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51acce1-f5f7-44d8-aadf-ae468cf2e29b-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 11:32:47 crc kubenswrapper[4789]: I1124 11:32:47.991524 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4149d0c4-d229-42bf-a53b-e1800c70946a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4149d0c4-d229-42bf-a53b-e1800c70946a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:32:47 crc kubenswrapper[4789]: E1124 11:32:47.991981 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.49197053 +0000 UTC m=+151.074441909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.036649 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4149d0c4-d229-42bf-a53b-e1800c70946a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4149d0c4-d229-42bf-a53b-e1800c70946a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.085626 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 11:32:48 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld
Nov 24 11:32:48 crc kubenswrapper[4789]: [+]process-running ok
Nov 24 11:32:48 crc kubenswrapper[4789]: healthz check failed
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.085686 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.094209 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.094584 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.594570056 +0000 UTC m=+151.177041435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.133837 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.195436 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.195740 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.69572772 +0000 UTC m=+151.278199099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.300416 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.301809 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.80178012 +0000 UTC m=+151.384251499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.302078 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.303925 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.803800056 +0000 UTC m=+151.386271445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.411959 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.412219 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:48.912204881 +0000 UTC m=+151.494676260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.460604 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qw5"]
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.462461 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51acce1-f5f7-44d8-aadf-ae468cf2e29b" containerName="collect-profiles"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.462494 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51acce1-f5f7-44d8-aadf-ae468cf2e29b" containerName="collect-profiles"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.464640 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51acce1-f5f7-44d8-aadf-ae468cf2e29b" containerName="collect-profiles"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.465339 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.485184 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.512523 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-catalog-content\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.512609 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-utilities\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.512626 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzq8j\" (UniqueName: \"kubernetes.io/projected/f6e57c00-016a-45da-8988-927342153596-kube-api-access-vzq8j\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.512650 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.513017 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.013004126 +0000 UTC m=+151.595475505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.545541 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qw5"]
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.569188 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"52008cf5b12efdf7e9e7df93aa8c0cd9cb149ce9108d01c786cc0be5609f2ae4"}
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.569234 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9c86f3b0af6b227e169cd2e4d2fb0c177dfde36338b37977cbac5c763b8acf78"}
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.584173 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb" event={"ID":"c51acce1-f5f7-44d8-aadf-ae468cf2e29b","Type":"ContainerDied","Data":"a85b515c55ab45f44811c7cfe1b6efd67039ef2e6f50f4766f508a42a71ead64"}
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.584216 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a85b515c55ab45f44811c7cfe1b6efd67039ef2e6f50f4766f508a42a71ead64"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.584286 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.592020 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"98686976531b0338bc32d7bd147f5b1263e1e44dae363fcdbaed7c3db66a1b0a"}
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.604144 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b6d72a9a196e1910532c765ec9fbb848d9966ff197daf955dd5c5e7ea4fde8f4"}
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.621087 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.621336 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-utilities\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.621354 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzq8j\" (UniqueName: \"kubernetes.io/projected/f6e57c00-016a-45da-8988-927342153596-kube-api-access-vzq8j\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.621397 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-catalog-content\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.622060 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-catalog-content\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.622122 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.12210814 +0000 UTC m=+151.704579519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.622302 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-utilities\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.690996 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzq8j\" (UniqueName: \"kubernetes.io/projected/f6e57c00-016a-45da-8988-927342153596-kube-api-access-vzq8j\") pod \"redhat-marketplace-k7qw5\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.720207 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plzxk"]
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.730956 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.731668 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.231654607 +0000 UTC m=+151.814125986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.838418 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7qw5"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.838597 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.838953 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.33893847 +0000 UTC m=+151.921409839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.910650 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gz4q9"]
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.922613 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-spvgg"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.923257 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.935832 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z9g4"]
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.943265 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-catalog-content\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.943339 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvlp7\" (UniqueName: \"kubernetes.io/projected/46fd6317-7fed-4725-9afd-18ea159e25d2-kube-api-access-pvlp7\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.943367 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.943396 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-utilities\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:48 crc kubenswrapper[4789]: E1124 11:32:48.943883 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.443867579 +0000 UTC m=+152.026338958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:48 crc kubenswrapper[4789]: I1124 11:32:48.966892 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz4q9"]
Nov 24 11:32:48 crc kubenswrapper[4789]: W1124 11:32:48.973632 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf176cbf2_3781_402f_a415_7f4d25eea239.slice/crio-ce128fc2acabf6f13b9cf10aa333e1d37931f5938521fdad76864b3b84d145c6 WatchSource:0}: Error finding container ce128fc2acabf6f13b9cf10aa333e1d37931f5938521fdad76864b3b84d145c6: Status 404 returned error can't find the container with id ce128fc2acabf6f13b9cf10aa333e1d37931f5938521fdad76864b3b84d145c6
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.054900 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:49 crc kubenswrapper[4789]: E1124 11:32:49.055729 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.555709629 +0000 UTC m=+152.138181008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.066869 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvlp7\" (UniqueName: \"kubernetes.io/projected/46fd6317-7fed-4725-9afd-18ea159e25d2-kube-api-access-pvlp7\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.066995 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.067115 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-utilities\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.067269 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-catalog-content\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.067901 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-catalog-content\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:49 crc kubenswrapper[4789]: E1124 11:32:49.068448 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.56843691 +0000 UTC m=+152.150908289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.068875 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-utilities\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.089043 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 11:32:49 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld
Nov 24 11:32:49 crc kubenswrapper[4789]: [+]process-running ok
Nov 24 11:32:49 crc kubenswrapper[4789]: healthz check failed
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.089089 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.104624 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6tlsz"]
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.140771 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvlp7\" (UniqueName: \"kubernetes.io/projected/46fd6317-7fed-4725-9afd-18ea159e25d2-kube-api-access-pvlp7\") pod \"redhat-marketplace-gz4q9\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.171192 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:49 crc kubenswrapper[4789]: E1124 11:32:49.171703 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.671680063 +0000 UTC m=+152.254151442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.208634 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsbrt"]
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.256029 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.256260 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.273359 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:49 crc kubenswrapper[4789]: E1124 11:32:49.273658 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.77364696 +0000 UTC m=+152.356118339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.297643 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.297697 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.301979 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz4q9"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.308136 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.363072 4789 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.386817 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:49 crc kubenswrapper[4789]: E1124 11:32:49.389212 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.889184702 +0000 UTC m=+152.471656081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.396916 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ljwn7"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.400364 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ljwn7"
Nov 24 11:32:49 crc kubenswrapper[4789]: E1124 11:32:49.400134 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ef3ee1_1338_4ca5_b290_ea83723c547e.slice/crio-bce30429f0622abc36c590a75290ff414c6740a6132911eef84810f640e59ad3.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.399865 4789 patch_prober.go:28] interesting pod/console-f9d7485db-ljwn7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.400676 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ljwn7" podUID="c9a07607-7a0f-4436-a3bc-9bd2cbf61663" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.419647 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-mlcwl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.419698 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mlcwl" podUID="c20b0775-ba72-4379-b5df-2ff35ffc2704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.420339 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.480029 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-mlcwl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.480086 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mlcwl" podUID="c20b0775-ba72-4379-b5df-2ff35ffc2704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.492257 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:49 crc kubenswrapper[4789]: E1124 11:32:49.492939 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:32:49.992926498 +0000 UTC m=+152.575397877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q52tc" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.494273 4789 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T11:32:49.363097062Z","Handler":null,"Name":""}
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.503047 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dr4mx"]
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.509549 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.513123 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.526563 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dr4mx"]
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.529033 4789 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.529086 4789 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.600770 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.600961 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72x8\" (UniqueName: \"kubernetes.io/projected/f7958781-e60c-4503-9aaf-a28078212e87-kube-api-access-p72x8\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.601053 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-catalog-content\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.601159 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-utilities\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.638919 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"70a3f223797734814158e2fd98acc41fe1cf675e38da25e2d311b16a9733cd05"}
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.639858 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.644361 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qw5"]
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.661008 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"94507ac92b8465edb5d4b8084a636edc53b4e1c6d0c9c67513c48cdae6ce8569"}
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.661696 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.665909 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4149d0c4-d229-42bf-a53b-e1800c70946a","Type":"ContainerStarted","Data":"fd0999faa6c5112545276fea90129947d54830d648adfceb7b8a67ca6bb46e48"}
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.701700 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.701779 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p72x8\" (UniqueName: \"kubernetes.io/projected/f7958781-e60c-4503-9aaf-a28078212e87-kube-api-access-p72x8\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.701831 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-catalog-content\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.701881 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-utilities\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.702266 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-utilities\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.702866 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-catalog-content\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx"
Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.705992 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" event={"ID":"2e152bba-2c0e-4f46-8bc9-279649243e6c","Type":"ContainerStarted","Data":"79daff266fbc44927dd1752aa5d19cc5367fc8ddf5d292501c666a4d5425bfd0"}
event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" event={"ID":"2e152bba-2c0e-4f46-8bc9-279649243e6c","Type":"ContainerStarted","Data":"79daff266fbc44927dd1752aa5d19cc5367fc8ddf5d292501c666a4d5425bfd0"} Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.715753 4789 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.715786 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.744713 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72x8\" (UniqueName: \"kubernetes.io/projected/f7958781-e60c-4503-9aaf-a28078212e87-kube-api-access-p72x8\") pod \"redhat-operators-dr4mx\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " pod="openshift-marketplace/redhat-operators-dr4mx" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.746310 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wkkmt" podStartSLOduration=12.746291458 podStartE2EDuration="12.746291458s" podCreationTimestamp="2025-11-24 11:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:49.744252982 +0000 UTC m=+152.326724361" watchObservedRunningTime="2025-11-24 11:32:49.746291458 +0000 UTC m=+152.328762827" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.778780 4789 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gtxzr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]log ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]etcd ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/max-in-flight-filter ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 24 11:32:49 crc kubenswrapper[4789]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 24 11:32:49 crc kubenswrapper[4789]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/project.openshift.io-projectcache ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/openshift.io-startinformers ok Nov 24 11:32:49 crc kubenswrapper[4789]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 24 11:32:49 crc 
kubenswrapper[4789]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 11:32:49 crc kubenswrapper[4789]: livez check failed Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.778842 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" podUID="22cf157e-ce67-43f4-bbaf-577720728887" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.788223 4789 generic.go:334] "Generic (PLEG): container finished" podID="f176cbf2-3781-402f-a415-7f4d25eea239" containerID="85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f" exitCode=0 Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.788539 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z9g4" event={"ID":"f176cbf2-3781-402f-a415-7f4d25eea239","Type":"ContainerDied","Data":"85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f"} Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.788586 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z9g4" event={"ID":"f176cbf2-3781-402f-a415-7f4d25eea239","Type":"ContainerStarted","Data":"ce128fc2acabf6f13b9cf10aa333e1d37931f5938521fdad76864b3b84d145c6"} Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.791376 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.797567 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsbrt" event={"ID":"d203f144-c8d5-46fb-8139-3af59a00c0c9","Type":"ContainerStarted","Data":"9645afc18acf4d3b0d8ac31a39d343c349e332c37914955602cc0bdf52d79a85"} Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.842699 4789 generic.go:334] "Generic (PLEG): container finished" podID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerID="bce30429f0622abc36c590a75290ff414c6740a6132911eef84810f640e59ad3" exitCode=0 Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.843548 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plzxk" event={"ID":"33ef3ee1-1338-4ca5-b290-ea83723c547e","Type":"ContainerDied","Data":"bce30429f0622abc36c590a75290ff414c6740a6132911eef84810f640e59ad3"} Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.843582 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plzxk" event={"ID":"33ef3ee1-1338-4ca5-b290-ea83723c547e","Type":"ContainerStarted","Data":"7288a03aed5d2f5c2f7bcf16314dee7f257a9b1eff6cc32e1fedecb5de4ebf80"} Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.846725 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmm68"] Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.849939 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.856167 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tlsz" event={"ID":"de46ba5d-4892-4797-bec0-edb2aadce87f","Type":"ContainerStarted","Data":"5cd99ba550c1374847bc71b6d928f41af8d88fa51e4967c3dd357a29e5056ba9"} Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.873158 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jdbnn" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.891713 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dr4mx" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.892125 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmm68"] Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.893556 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q52tc\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.912387 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-utilities\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.912572 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdf2w\" (UniqueName: \"kubernetes.io/projected/8764ee23-63c4-4186-966f-4e97189aa541-kube-api-access-kdf2w\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.915886 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-catalog-content\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.919344 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz4q9"] Nov 24 11:32:49 crc kubenswrapper[4789]: I1124 11:32:49.952869 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.017780 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdf2w\" (UniqueName: \"kubernetes.io/projected/8764ee23-63c4-4186-966f-4e97189aa541-kube-api-access-kdf2w\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.017860 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-catalog-content\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.017884 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-utilities\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.018690 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-utilities\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.020162 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-catalog-content\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.045543 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdf2w\" (UniqueName: \"kubernetes.io/projected/8764ee23-63c4-4186-966f-4e97189aa541-kube-api-access-kdf2w\") pod \"redhat-operators-vmm68\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") " pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.080191 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.084626 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:50 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:50 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:50 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.084708 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.164161 4789 patch_prober.go:28] interesting 
pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.164207 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.199126 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.200108 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.262879 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.444734 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q52tc"] Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.588187 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dr4mx"] Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.877716 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmm68"] Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.880428 4789 generic.go:334] "Generic (PLEG): container finished" podID="4149d0c4-d229-42bf-a53b-e1800c70946a" containerID="31d725a9b55dc58130a1fd61eb4e7f722480b69d6bdea8aa2d4132a0e1835305" exitCode=0 Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.880552 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4149d0c4-d229-42bf-a53b-e1800c70946a","Type":"ContainerDied","Data":"31d725a9b55dc58130a1fd61eb4e7f722480b69d6bdea8aa2d4132a0e1835305"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.886274 4789 generic.go:334] "Generic (PLEG): container finished" podID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerID="86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b" exitCode=0 Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.886402 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsbrt" event={"ID":"d203f144-c8d5-46fb-8139-3af59a00c0c9","Type":"ContainerDied","Data":"86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.898616 4789 generic.go:334] "Generic (PLEG): container finished" podID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerID="778eb862fac4f3bca96c7ed9ff6594ec39f8b4ddfa1b0b30bc74d32077e81659" exitCode=0 Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.898720 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tlsz" event={"ID":"de46ba5d-4892-4797-bec0-edb2aadce87f","Type":"ContainerDied","Data":"778eb862fac4f3bca96c7ed9ff6594ec39f8b4ddfa1b0b30bc74d32077e81659"} Nov 24 
11:32:50 crc kubenswrapper[4789]: W1124 11:32:50.899876 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8764ee23_63c4_4186_966f_4e97189aa541.slice/crio-3f1cd7eef4ce2ce1bef4d3ff482742a4c0210fc07e25926db14a0e0cafab7ad4 WatchSource:0}: Error finding container 3f1cd7eef4ce2ce1bef4d3ff482742a4c0210fc07e25926db14a0e0cafab7ad4: Status 404 returned error can't find the container with id 3f1cd7eef4ce2ce1bef4d3ff482742a4c0210fc07e25926db14a0e0cafab7ad4 Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.909014 4789 generic.go:334] "Generic (PLEG): container finished" podID="f7958781-e60c-4503-9aaf-a28078212e87" containerID="0e8532e947182b2765036556b574252db6fd7420bc25bcaf1de4d6a3efd247df" exitCode=0 Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.909072 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr4mx" event={"ID":"f7958781-e60c-4503-9aaf-a28078212e87","Type":"ContainerDied","Data":"0e8532e947182b2765036556b574252db6fd7420bc25bcaf1de4d6a3efd247df"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.909093 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr4mx" event={"ID":"f7958781-e60c-4503-9aaf-a28078212e87","Type":"ContainerStarted","Data":"e47b33da9cd2f776fbeca59879c418740749f80bd18a3e6a293d443b7ce8fada"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.922628 4789 generic.go:334] "Generic (PLEG): container finished" podID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerID="7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e" exitCode=0 Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.922710 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz4q9" event={"ID":"46fd6317-7fed-4725-9afd-18ea159e25d2","Type":"ContainerDied","Data":"7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.922736 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz4q9" event={"ID":"46fd6317-7fed-4725-9afd-18ea159e25d2","Type":"ContainerStarted","Data":"18f170350c2c5ec35f788479af3c101bca55f7fe4d8d03e4186d956b9d18594f"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.933090 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" event={"ID":"51c0ab73-bbc1-4f70-afa7-059dec256973","Type":"ContainerStarted","Data":"8275e9b3d5833f89a4c7d9b219a72d5a9521452da859d64476fc7801a87e1930"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.933128 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" event={"ID":"51c0ab73-bbc1-4f70-afa7-059dec256973","Type":"ContainerStarted","Data":"243e6f5c0b626f134c9e06d401ccd56dba8c206cd2f1e2887444948da6496657"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.933589 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.938447 4789 generic.go:334] "Generic (PLEG): container finished" podID="f6e57c00-016a-45da-8988-927342153596" containerID="d402807b27660c07bba28dcce8e08cb52d9c55d151868eb970a0306e2409aeab" exitCode=0 Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.938853 4789 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-k7qw5" event={"ID":"f6e57c00-016a-45da-8988-927342153596","Type":"ContainerDied","Data":"d402807b27660c07bba28dcce8e08cb52d9c55d151868eb970a0306e2409aeab"} Nov 24 11:32:50 crc kubenswrapper[4789]: I1124 11:32:50.938908 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qw5" event={"ID":"f6e57c00-016a-45da-8988-927342153596","Type":"ContainerStarted","Data":"8415a60f117495bcfc19b2a40c9ec0ae74b129d573df0c2dcc03299d7b664e03"} Nov 24 11:32:51 crc kubenswrapper[4789]: I1124 11:32:51.013889 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" podStartSLOduration=128.013871748 podStartE2EDuration="2m8.013871748s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:51.012349696 +0000 UTC m=+153.594821075" watchObservedRunningTime="2025-11-24 11:32:51.013871748 +0000 UTC m=+153.596343127" Nov 24 11:32:51 crc kubenswrapper[4789]: I1124 11:32:51.070159 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:51 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:51 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:51 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:51 crc kubenswrapper[4789]: I1124 11:32:51.070210 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.049132 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xt8qf" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.049381 4789 generic.go:334] "Generic (PLEG): container finished" podID="8764ee23-63c4-4186-966f-4e97189aa541" containerID="c5887ff66199617dbb502f10bc24d8ecec57c455583e30dfe0c5c5a35dac11ce" exitCode=0 Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.050287 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmm68" event={"ID":"8764ee23-63c4-4186-966f-4e97189aa541","Type":"ContainerDied","Data":"c5887ff66199617dbb502f10bc24d8ecec57c455583e30dfe0c5c5a35dac11ce"} Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.050313 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmm68" event={"ID":"8764ee23-63c4-4186-966f-4e97189aa541","Type":"ContainerStarted","Data":"3f1cd7eef4ce2ce1bef4d3ff482742a4c0210fc07e25926db14a0e0cafab7ad4"} Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.075029 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:52 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:52 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:52 crc kubenswrapper[4789]: healthz check failed Nov 24 
11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.075060 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.459647 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.460554 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.466530 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.466745 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.484149 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.523986 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca038654-9fdf-4a95-ba82-420060b252c8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ca038654-9fdf-4a95-ba82-420060b252c8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.524060 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca038654-9fdf-4a95-ba82-420060b252c8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ca038654-9fdf-4a95-ba82-420060b252c8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.625603 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca038654-9fdf-4a95-ba82-420060b252c8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ca038654-9fdf-4a95-ba82-420060b252c8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.625691 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca038654-9fdf-4a95-ba82-420060b252c8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ca038654-9fdf-4a95-ba82-420060b252c8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.625783 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca038654-9fdf-4a95-ba82-420060b252c8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ca038654-9fdf-4a95-ba82-420060b252c8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.656517 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca038654-9fdf-4a95-ba82-420060b252c8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"ca038654-9fdf-4a95-ba82-420060b252c8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.737163 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.822303 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.827756 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4149d0c4-d229-42bf-a53b-e1800c70946a-kubelet-dir\") pod \"4149d0c4-d229-42bf-a53b-e1800c70946a\" (UID: \"4149d0c4-d229-42bf-a53b-e1800c70946a\") " Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.827803 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4149d0c4-d229-42bf-a53b-e1800c70946a-kube-api-access\") pod \"4149d0c4-d229-42bf-a53b-e1800c70946a\" (UID: \"4149d0c4-d229-42bf-a53b-e1800c70946a\") " Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.828389 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4149d0c4-d229-42bf-a53b-e1800c70946a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4149d0c4-d229-42bf-a53b-e1800c70946a" (UID: "4149d0c4-d229-42bf-a53b-e1800c70946a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.828777 4789 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4149d0c4-d229-42bf-a53b-e1800c70946a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.847844 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4149d0c4-d229-42bf-a53b-e1800c70946a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4149d0c4-d229-42bf-a53b-e1800c70946a" (UID: "4149d0c4-d229-42bf-a53b-e1800c70946a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:32:52 crc kubenswrapper[4789]: I1124 11:32:52.929717 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4149d0c4-d229-42bf-a53b-e1800c70946a-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:53 crc kubenswrapper[4789]: I1124 11:32:53.069966 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:53 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:53 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:53 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:53 crc kubenswrapper[4789]: I1124 11:32:53.070009 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:53 crc kubenswrapper[4789]: I1124 11:32:53.079033 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 11:32:53 crc kubenswrapper[4789]: I1124 11:32:53.080502 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4149d0c4-d229-42bf-a53b-e1800c70946a","Type":"ContainerDied","Data":"fd0999faa6c5112545276fea90129947d54830d648adfceb7b8a67ca6bb46e48"} Nov 24 11:32:53 crc kubenswrapper[4789]: I1124 11:32:53.080534 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0999faa6c5112545276fea90129947d54830d648adfceb7b8a67ca6bb46e48" Nov 24 11:32:53 crc kubenswrapper[4789]: I1124 11:32:53.550164 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 11:32:54 crc kubenswrapper[4789]: I1124 11:32:54.070681 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:54 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:54 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:54 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:54 crc kubenswrapper[4789]: I1124 11:32:54.070731 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:54 crc kubenswrapper[4789]: I1124 11:32:54.096431 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ca038654-9fdf-4a95-ba82-420060b252c8","Type":"ContainerStarted","Data":"1dfb92b1eb906899fec1347ece79f761f42450a1ab2ecf17eec4f59346d0e1dc"} Nov 24 11:32:54 crc kubenswrapper[4789]: I1124 11:32:54.296784 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:54 crc kubenswrapper[4789]: I1124 11:32:54.300796 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gtxzr" Nov 24 11:32:55 crc kubenswrapper[4789]: I1124 11:32:55.083294 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:55 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:55 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:55 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:55 crc kubenswrapper[4789]: I1124 11:32:55.084192 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:55 crc kubenswrapper[4789]: I1124 11:32:55.138814 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ca038654-9fdf-4a95-ba82-420060b252c8","Type":"ContainerStarted","Data":"290be554eeec9ab6cac3671a95f06d849fb146ef81812b107e715c2645564bce"} Nov 24 11:32:55 crc kubenswrapper[4789]: I1124 11:32:55.179597 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.179582975 podStartE2EDuration="3.179582975s" podCreationTimestamp="2025-11-24 11:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:55.177565239 +0000 UTC m=+157.760036618" watchObservedRunningTime="2025-11-24 11:32:55.179582975 +0000 UTC m=+157.762054354" Nov 24 11:32:56 crc kubenswrapper[4789]: I1124 11:32:56.069917 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:56 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:56 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:56 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:56 crc kubenswrapper[4789]: I1124 11:32:56.070290 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:56 crc kubenswrapper[4789]: I1124 11:32:56.163687 4789 generic.go:334] "Generic (PLEG): container finished" podID="ca038654-9fdf-4a95-ba82-420060b252c8" containerID="290be554eeec9ab6cac3671a95f06d849fb146ef81812b107e715c2645564bce" exitCode=0 Nov 24 11:32:56 crc kubenswrapper[4789]: I1124 11:32:56.163841 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ca038654-9fdf-4a95-ba82-420060b252c8","Type":"ContainerDied","Data":"290be554eeec9ab6cac3671a95f06d849fb146ef81812b107e715c2645564bce"} Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.068545 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 
11:32:57 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:57 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:57 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.068610 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.586342 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.601725 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca038654-9fdf-4a95-ba82-420060b252c8-kubelet-dir\") pod \"ca038654-9fdf-4a95-ba82-420060b252c8\" (UID: \"ca038654-9fdf-4a95-ba82-420060b252c8\") " Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.601794 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca038654-9fdf-4a95-ba82-420060b252c8-kube-api-access\") pod \"ca038654-9fdf-4a95-ba82-420060b252c8\" (UID: \"ca038654-9fdf-4a95-ba82-420060b252c8\") " Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.601833 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca038654-9fdf-4a95-ba82-420060b252c8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ca038654-9fdf-4a95-ba82-420060b252c8" (UID: "ca038654-9fdf-4a95-ba82-420060b252c8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.602014 4789 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca038654-9fdf-4a95-ba82-420060b252c8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.625649 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca038654-9fdf-4a95-ba82-420060b252c8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ca038654-9fdf-4a95-ba82-420060b252c8" (UID: "ca038654-9fdf-4a95-ba82-420060b252c8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:32:57 crc kubenswrapper[4789]: I1124 11:32:57.705697 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca038654-9fdf-4a95-ba82-420060b252c8-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:32:58 crc kubenswrapper[4789]: I1124 11:32:58.078317 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:58 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:58 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:58 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:58 crc kubenswrapper[4789]: I1124 11:32:58.078398 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:58 crc kubenswrapper[4789]: I1124 11:32:58.205027 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ca038654-9fdf-4a95-ba82-420060b252c8","Type":"ContainerDied","Data":"1dfb92b1eb906899fec1347ece79f761f42450a1ab2ecf17eec4f59346d0e1dc"} Nov 24 11:32:58 crc kubenswrapper[4789]: I1124 11:32:58.205068 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dfb92b1eb906899fec1347ece79f761f42450a1ab2ecf17eec4f59346d0e1dc" Nov 24 11:32:58 crc kubenswrapper[4789]: I1124 11:32:58.205124 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:32:59 crc kubenswrapper[4789]: I1124 11:32:59.069355 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:32:59 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:32:59 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:32:59 crc kubenswrapper[4789]: healthz check failed Nov 24 11:32:59 crc kubenswrapper[4789]: I1124 11:32:59.069419 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:32:59 crc kubenswrapper[4789]: I1124 11:32:59.396738 4789 patch_prober.go:28] interesting pod/console-f9d7485db-ljwn7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 24 11:32:59 crc kubenswrapper[4789]: I1124 11:32:59.397208 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ljwn7" podUID="c9a07607-7a0f-4436-a3bc-9bd2cbf61663" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 24 11:32:59 crc kubenswrapper[4789]: I1124 11:32:59.417897 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-mlcwl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:32:59 crc kubenswrapper[4789]: I1124 11:32:59.417945 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mlcwl" podUID="c20b0775-ba72-4379-b5df-2ff35ffc2704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:32:59 crc kubenswrapper[4789]: I1124 11:32:59.417897 4789 patch_prober.go:28] interesting pod/downloads-7954f5f757-mlcwl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:32:59 crc kubenswrapper[4789]: I1124 11:32:59.418020 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mlcwl" podUID="c20b0775-ba72-4379-b5df-2ff35ffc2704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:33:00 crc kubenswrapper[4789]: I1124 11:33:00.070770 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:33:00 crc kubenswrapper[4789]: [-]has-synced failed: reason withheld Nov 24 11:33:00 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:33:00 crc kubenswrapper[4789]: healthz check failed Nov 24 11:33:00 crc 
kubenswrapper[4789]: I1124 11:33:00.071175 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:33:01 crc kubenswrapper[4789]: I1124 11:33:01.069585 4789 patch_prober.go:28] interesting pod/router-default-5444994796-h8dsm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:33:01 crc kubenswrapper[4789]: [+]has-synced ok Nov 24 11:33:01 crc kubenswrapper[4789]: [+]process-running ok Nov 24 11:33:01 crc kubenswrapper[4789]: healthz check failed Nov 24 11:33:01 crc kubenswrapper[4789]: I1124 11:33:01.069642 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h8dsm" podUID="1eb9a1b5-8f0a-426b-a7fe-8e71487c6a7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:33:02 crc kubenswrapper[4789]: I1124 11:33:02.070709 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:33:02 crc kubenswrapper[4789]: I1124 11:33:02.073503 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h8dsm" Nov 24 11:33:06 crc kubenswrapper[4789]: I1124 11:33:06.443382 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:33:06 crc kubenswrapper[4789]: I1124 11:33:06.452689 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1033d5e6-680c-4193-aade-8c3d801b0e3f-metrics-certs\") pod \"network-metrics-daemon-s69rz\" (UID: \"1033d5e6-680c-4193-aade-8c3d801b0e3f\") " pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:33:06 crc kubenswrapper[4789]: I1124 11:33:06.611999 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-s69rz" Nov 24 11:33:09 crc kubenswrapper[4789]: I1124 11:33:09.402350 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:33:09 crc kubenswrapper[4789]: I1124 11:33:09.408145 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:33:09 crc kubenswrapper[4789]: I1124 11:33:09.428656 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mlcwl" Nov 24 11:33:09 crc kubenswrapper[4789]: I1124 11:33:09.958209 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:33:17 crc kubenswrapper[4789]: E1124 11:33:17.005116 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 11:33:17 crc kubenswrapper[4789]: E1124 11:33:17.005610 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vcjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6tlsz_openshift-marketplace(de46ba5d-4892-4797-bec0-edb2aadce87f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:33:17 crc kubenswrapper[4789]: E1124 11:33:17.006776 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6tlsz" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" Nov 24 11:33:17 crc kubenswrapper[4789]: E1124 11:33:17.048052 4789 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 11:33:17 crc kubenswrapper[4789]: E1124 11:33:17.048225 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvh4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4z9g4_openshift-marketplace(f176cbf2-3781-402f-a415-7f4d25eea239): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:33:17 crc kubenswrapper[4789]: E1124 11:33:17.049414 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4z9g4" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" Nov 24 11:33:20 crc kubenswrapper[4789]: I1124 11:33:20.162441 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:33:20 crc kubenswrapper[4789]: I1124 11:33:20.162938 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.196834 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6tlsz" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.197076 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4z9g4" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.284603 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.284713 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p72x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dr4mx_openshift-marketplace(f7958781-e60c-4503-9aaf-a28078212e87): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.285945 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dr4mx" podUID="f7958781-e60c-4503-9aaf-a28078212e87" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.320511 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.320810 4789 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvlp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gz4q9_openshift-marketplace(46fd6317-7fed-4725-9afd-18ea159e25d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.322179 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gz4q9" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" Nov 24 11:33:20 crc kubenswrapper[4789]: I1124 11:33:20.323050 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pcnqw" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.338095 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gz4q9" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.340725 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dr4mx" podUID="f7958781-e60c-4503-9aaf-a28078212e87" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.415811 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.415964 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmpn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dsbrt_openshift-marketplace(d203f144-c8d5-46fb-8139-3af59a00c0c9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.417814 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dsbrt" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.423117 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.423237 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdf2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vmm68_openshift-marketplace(8764ee23-63c4-4186-966f-4e97189aa541): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:33:20 crc kubenswrapper[4789]: E1124 11:33:20.425962 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vmm68" podUID="8764ee23-63c4-4186-966f-4e97189aa541" Nov 24 11:33:20 crc kubenswrapper[4789]: I1124 11:33:20.671943 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-s69rz"] Nov 24 11:33:20 crc kubenswrapper[4789]: W1124 11:33:20.680804 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1033d5e6_680c_4193_aade_8c3d801b0e3f.slice/crio-25fe61210bf8f01096e3ed9591a04e05c494714bda62b2f3f7e0ae2ad8eecfda WatchSource:0}: Error finding container 25fe61210bf8f01096e3ed9591a04e05c494714bda62b2f3f7e0ae2ad8eecfda: Status 404 returned error can't find the container with id 25fe61210bf8f01096e3ed9591a04e05c494714bda62b2f3f7e0ae2ad8eecfda Nov 24 11:33:21 crc kubenswrapper[4789]: I1124 11:33:21.339796 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-s69rz" event={"ID":"1033d5e6-680c-4193-aade-8c3d801b0e3f","Type":"ContainerStarted","Data":"aec82537ac3999941662cb71bd03f3d940b1b8ef7b97e32f4e8b0dc73046f6e7"} Nov 24 11:33:21 crc kubenswrapper[4789]: I1124 11:33:21.340047 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-s69rz" event={"ID":"1033d5e6-680c-4193-aade-8c3d801b0e3f","Type":"ContainerStarted","Data":"323c2406d3dcc64c08dcb12f316559693a385b7ad5b307f3ced0e3fdf9dcfda5"} Nov 24 11:33:21 crc kubenswrapper[4789]: I1124 11:33:21.340058 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-s69rz" 
event={"ID":"1033d5e6-680c-4193-aade-8c3d801b0e3f","Type":"ContainerStarted","Data":"25fe61210bf8f01096e3ed9591a04e05c494714bda62b2f3f7e0ae2ad8eecfda"} Nov 24 11:33:21 crc kubenswrapper[4789]: I1124 11:33:21.341350 4789 generic.go:334] "Generic (PLEG): container finished" podID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerID="8bbaeeb1a2f65e0a1bb5adefcb2d84d576a57d151ee51fa8a48026e5c0d67e31" exitCode=0 Nov 24 11:33:21 crc kubenswrapper[4789]: I1124 11:33:21.341425 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plzxk" event={"ID":"33ef3ee1-1338-4ca5-b290-ea83723c547e","Type":"ContainerDied","Data":"8bbaeeb1a2f65e0a1bb5adefcb2d84d576a57d151ee51fa8a48026e5c0d67e31"} Nov 24 11:33:21 crc kubenswrapper[4789]: I1124 11:33:21.347867 4789 generic.go:334] "Generic (PLEG): container finished" podID="f6e57c00-016a-45da-8988-927342153596" containerID="b4b3e1d01217f002a3668f3a85ee13ef810ee3cd8d812ae738ba8ccc4c0d4d3f" exitCode=0 Nov 24 11:33:21 crc kubenswrapper[4789]: I1124 11:33:21.347962 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qw5" event={"ID":"f6e57c00-016a-45da-8988-927342153596","Type":"ContainerDied","Data":"b4b3e1d01217f002a3668f3a85ee13ef810ee3cd8d812ae738ba8ccc4c0d4d3f"} Nov 24 11:33:21 crc kubenswrapper[4789]: E1124 11:33:21.351549 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vmm68" podUID="8764ee23-63c4-4186-966f-4e97189aa541" Nov 24 11:33:21 crc kubenswrapper[4789]: E1124 11:33:21.351658 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dsbrt" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" Nov 24 11:33:21 crc kubenswrapper[4789]: I1124 11:33:21.374743 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-s69rz" podStartSLOduration=158.374719842 podStartE2EDuration="2m38.374719842s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:33:21.360838852 +0000 UTC m=+183.943310291" watchObservedRunningTime="2025-11-24 11:33:21.374719842 +0000 UTC m=+183.957191221" Nov 24 11:33:22 crc kubenswrapper[4789]: I1124 11:33:22.357221 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qw5" event={"ID":"f6e57c00-016a-45da-8988-927342153596","Type":"ContainerStarted","Data":"181779097f1f88b67bc74d3061c14a5380963f15ed1f8c5216829a19e9679c78"} Nov 24 11:33:22 crc kubenswrapper[4789]: I1124 11:33:22.361941 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plzxk" event={"ID":"33ef3ee1-1338-4ca5-b290-ea83723c547e","Type":"ContainerStarted","Data":"9a98be62f1c83cade79bacad34bca857474fff5602398c05a7b93cb2009b1eb5"} Nov 24 11:33:22 crc kubenswrapper[4789]: I1124 11:33:22.378727 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7qw5" podStartSLOduration=3.316649113 
podStartE2EDuration="34.378711523s" podCreationTimestamp="2025-11-24 11:32:48 +0000 UTC" firstStartedPulling="2025-11-24 11:32:50.94158454 +0000 UTC m=+153.524055919" lastFinishedPulling="2025-11-24 11:33:22.00364695 +0000 UTC m=+184.586118329" observedRunningTime="2025-11-24 11:33:22.376780373 +0000 UTC m=+184.959251772" watchObservedRunningTime="2025-11-24 11:33:22.378711523 +0000 UTC m=+184.961182902" Nov 24 11:33:22 crc kubenswrapper[4789]: I1124 11:33:22.395107 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-plzxk" podStartSLOduration=4.355593753 podStartE2EDuration="36.395092267s" podCreationTimestamp="2025-11-24 11:32:46 +0000 UTC" firstStartedPulling="2025-11-24 11:32:49.844888602 +0000 UTC m=+152.427359981" lastFinishedPulling="2025-11-24 11:33:21.884387116 +0000 UTC m=+184.466858495" observedRunningTime="2025-11-24 11:33:22.391607666 +0000 UTC m=+184.974079055" watchObservedRunningTime="2025-11-24 11:33:22.395092267 +0000 UTC m=+184.977563646" Nov 24 11:33:26 crc kubenswrapper[4789]: I1124 11:33:26.211157 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:33:26 crc kubenswrapper[4789]: I1124 11:33:26.603045 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:33:26 crc kubenswrapper[4789]: I1124 11:33:26.603086 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:33:26 crc kubenswrapper[4789]: I1124 11:33:26.748568 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:33:27 crc kubenswrapper[4789]: I1124 11:33:27.436684 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:33:27 crc kubenswrapper[4789]: I1124 11:33:27.999286 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bp2hb"] Nov 24 11:33:28 crc kubenswrapper[4789]: I1124 11:33:28.839512 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k7qw5" Nov 24 11:33:28 crc kubenswrapper[4789]: I1124 11:33:28.839922 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k7qw5" Nov 24 11:33:28 crc kubenswrapper[4789]: I1124 11:33:28.881279 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k7qw5" Nov 24 11:33:29 crc kubenswrapper[4789]: I1124 11:33:29.439351 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7qw5" Nov 24 11:33:34 crc kubenswrapper[4789]: I1124 11:33:34.438206 4789 generic.go:334] "Generic (PLEG): container finished" podID="f176cbf2-3781-402f-a415-7f4d25eea239" containerID="7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764" exitCode=0 Nov 24 11:33:34 crc kubenswrapper[4789]: I1124 11:33:34.438297 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z9g4" event={"ID":"f176cbf2-3781-402f-a415-7f4d25eea239","Type":"ContainerDied","Data":"7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764"} Nov 24 11:33:34 crc 
kubenswrapper[4789]: I1124 11:33:34.440691 4789 generic.go:334] "Generic (PLEG): container finished" podID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerID="bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b" exitCode=0 Nov 24 11:33:34 crc kubenswrapper[4789]: I1124 11:33:34.440762 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsbrt" event={"ID":"d203f144-c8d5-46fb-8139-3af59a00c0c9","Type":"ContainerDied","Data":"bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b"} Nov 24 11:33:34 crc kubenswrapper[4789]: I1124 11:33:34.443640 4789 generic.go:334] "Generic (PLEG): container finished" podID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerID="6dca67c7b50ab2cd1703a094bc65368754021936cd8fb6b4938d0dda848922c4" exitCode=0 Nov 24 11:33:34 crc kubenswrapper[4789]: I1124 11:33:34.443683 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tlsz" event={"ID":"de46ba5d-4892-4797-bec0-edb2aadce87f","Type":"ContainerDied","Data":"6dca67c7b50ab2cd1703a094bc65368754021936cd8fb6b4938d0dda848922c4"} Nov 24 11:33:34 crc kubenswrapper[4789]: I1124 11:33:34.447249 4789 generic.go:334] "Generic (PLEG): container finished" podID="8764ee23-63c4-4186-966f-4e97189aa541" containerID="bd833ab1bac283cdac268637fcfb62e21ff2c1cf6c39dce48564b50960f8237a" exitCode=0 Nov 24 11:33:34 crc kubenswrapper[4789]: I1124 11:33:34.447275 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmm68" event={"ID":"8764ee23-63c4-4186-966f-4e97189aa541","Type":"ContainerDied","Data":"bd833ab1bac283cdac268637fcfb62e21ff2c1cf6c39dce48564b50960f8237a"} Nov 24 11:33:35 crc kubenswrapper[4789]: I1124 11:33:35.453713 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z9g4" event={"ID":"f176cbf2-3781-402f-a415-7f4d25eea239","Type":"ContainerStarted","Data":"08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865"} Nov 24 11:33:35 crc kubenswrapper[4789]: I1124 11:33:35.456229 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsbrt" event={"ID":"d203f144-c8d5-46fb-8139-3af59a00c0c9","Type":"ContainerStarted","Data":"ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297"} Nov 24 11:33:35 crc kubenswrapper[4789]: I1124 11:33:35.458790 4789 generic.go:334] "Generic (PLEG): container finished" podID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerID="8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5" exitCode=0 Nov 24 11:33:35 crc kubenswrapper[4789]: I1124 11:33:35.458812 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz4q9" event={"ID":"46fd6317-7fed-4725-9afd-18ea159e25d2","Type":"ContainerDied","Data":"8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5"} Nov 24 11:33:35 crc kubenswrapper[4789]: I1124 11:33:35.483197 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4z9g4" podStartSLOduration=4.454850365 podStartE2EDuration="49.483174597s" podCreationTimestamp="2025-11-24 11:32:46 +0000 UTC" firstStartedPulling="2025-11-24 11:32:49.791067845 +0000 UTC m=+152.373539224" lastFinishedPulling="2025-11-24 11:33:34.819392077 +0000 UTC m=+197.401863456" observedRunningTime="2025-11-24 11:33:35.472282796 +0000 UTC m=+198.054754185" watchObservedRunningTime="2025-11-24 11:33:35.483174597 +0000 UTC 
m=+198.065645986" Nov 24 11:33:35 crc kubenswrapper[4789]: I1124 11:33:35.525370 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dsbrt" podStartSLOduration=5.54450914 podStartE2EDuration="49.525354318s" podCreationTimestamp="2025-11-24 11:32:46 +0000 UTC" firstStartedPulling="2025-11-24 11:32:50.898540041 +0000 UTC m=+153.481011420" lastFinishedPulling="2025-11-24 11:33:34.879385219 +0000 UTC m=+197.461856598" observedRunningTime="2025-11-24 11:33:35.507317583 +0000 UTC m=+198.089788962" watchObservedRunningTime="2025-11-24 11:33:35.525354318 +0000 UTC m=+198.107825697" Nov 24 11:33:36 crc kubenswrapper[4789]: I1124 11:33:36.463921 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tlsz" event={"ID":"de46ba5d-4892-4797-bec0-edb2aadce87f","Type":"ContainerStarted","Data":"b7657f6e14c4dec121b091739f66fe48632da23c161888660d6de3684122daaa"} Nov 24 11:33:36 crc kubenswrapper[4789]: I1124 11:33:36.466018 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmm68" event={"ID":"8764ee23-63c4-4186-966f-4e97189aa541","Type":"ContainerStarted","Data":"be06d4d0a4cc43f6c7fd8d65345bac97e09662749d05cc0e9f3df746b68ba02f"} Nov 24 11:33:36 crc kubenswrapper[4789]: I1124 11:33:36.467876 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz4q9" event={"ID":"46fd6317-7fed-4725-9afd-18ea159e25d2","Type":"ContainerStarted","Data":"e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa"} Nov 24 11:33:36 crc kubenswrapper[4789]: I1124 11:33:36.508240 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6tlsz" podStartSLOduration=4.620394155 podStartE2EDuration="50.508223183s" podCreationTimestamp="2025-11-24 11:32:46 +0000 UTC" firstStartedPulling="2025-11-24 11:32:49.863911097 +0000 UTC m=+152.446382476" lastFinishedPulling="2025-11-24 11:33:35.751740125 +0000 UTC m=+198.334211504" observedRunningTime="2025-11-24 11:33:36.486630435 +0000 UTC m=+199.069101814" watchObservedRunningTime="2025-11-24 11:33:36.508223183 +0000 UTC m=+199.090694562" Nov 24 11:33:36 crc kubenswrapper[4789]: I1124 11:33:36.531224 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmm68" podStartSLOduration=3.995163605 podStartE2EDuration="47.531209938s" podCreationTimestamp="2025-11-24 11:32:49 +0000 UTC" firstStartedPulling="2025-11-24 11:32:52.074717645 +0000 UTC m=+154.657189024" lastFinishedPulling="2025-11-24 11:33:35.610763978 +0000 UTC m=+198.193235357" observedRunningTime="2025-11-24 11:33:36.512250737 +0000 UTC m=+199.094722106" watchObservedRunningTime="2025-11-24 11:33:36.531209938 +0000 UTC m=+199.113681317" Nov 24 11:33:36 crc kubenswrapper[4789]: I1124 11:33:36.795104 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:33:36 crc kubenswrapper[4789]: I1124 11:33:36.795150 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:33:37 crc kubenswrapper[4789]: I1124 11:33:37.093865 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:33:37 crc kubenswrapper[4789]: I1124 11:33:37.093912 4789 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:33:37 crc kubenswrapper[4789]: I1124 11:33:37.275963 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:33:37 crc kubenswrapper[4789]: I1124 11:33:37.276008 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:33:37 crc kubenswrapper[4789]: I1124 11:33:37.317846 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:33:37 crc kubenswrapper[4789]: I1124 11:33:37.362236 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gz4q9" podStartSLOduration=4.430650799 podStartE2EDuration="49.362215124s" podCreationTimestamp="2025-11-24 11:32:48 +0000 UTC" firstStartedPulling="2025-11-24 11:32:50.941260362 +0000 UTC m=+153.523731731" lastFinishedPulling="2025-11-24 11:33:35.872824677 +0000 UTC m=+198.455296056" observedRunningTime="2025-11-24 11:33:36.531485885 +0000 UTC m=+199.113957264" watchObservedRunningTime="2025-11-24 11:33:37.362215124 +0000 UTC m=+199.944686503" Nov 24 11:33:37 crc kubenswrapper[4789]: I1124 11:33:37.475127 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr4mx" event={"ID":"f7958781-e60c-4503-9aaf-a28078212e87","Type":"ContainerStarted","Data":"4b5692ebcc366096235ee172f5b27f7682545a0a96787b60201c15ca4dc0da2f"} Nov 24 11:33:37 crc kubenswrapper[4789]: I1124 11:33:37.831708 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6tlsz" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="registry-server" probeResult="failure" output=< Nov 24 11:33:37 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Nov 24 11:33:37 crc kubenswrapper[4789]: > Nov 24 11:33:38 crc kubenswrapper[4789]: I1124 11:33:38.130683 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4z9g4" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="registry-server" probeResult="failure" output=< Nov 24 11:33:38 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Nov 24 11:33:38 crc kubenswrapper[4789]: > Nov 24 11:33:38 crc kubenswrapper[4789]: I1124 11:33:38.480867 4789 generic.go:334] "Generic (PLEG): container finished" podID="f7958781-e60c-4503-9aaf-a28078212e87" containerID="4b5692ebcc366096235ee172f5b27f7682545a0a96787b60201c15ca4dc0da2f" exitCode=0 Nov 24 11:33:38 crc kubenswrapper[4789]: I1124 11:33:38.480903 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr4mx" event={"ID":"f7958781-e60c-4503-9aaf-a28078212e87","Type":"ContainerDied","Data":"4b5692ebcc366096235ee172f5b27f7682545a0a96787b60201c15ca4dc0da2f"} Nov 24 11:33:39 crc kubenswrapper[4789]: I1124 11:33:39.302638 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gz4q9" Nov 24 11:33:39 crc kubenswrapper[4789]: I1124 11:33:39.302917 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gz4q9" Nov 24 11:33:39 crc kubenswrapper[4789]: I1124 11:33:39.365247 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-gz4q9" Nov 24 11:33:39 crc kubenswrapper[4789]: I1124 11:33:39.492307 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr4mx" event={"ID":"f7958781-e60c-4503-9aaf-a28078212e87","Type":"ContainerStarted","Data":"1cd454db2d29ec81815d17d93b6931de653070e6d21ad8be613a283a609fef0e"} Nov 24 11:33:39 crc kubenswrapper[4789]: I1124 11:33:39.892419 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dr4mx" Nov 24 11:33:39 crc kubenswrapper[4789]: I1124 11:33:39.892501 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dr4mx" Nov 24 11:33:40 crc kubenswrapper[4789]: I1124 11:33:40.201014 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:33:40 crc kubenswrapper[4789]: I1124 11:33:40.201063 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:33:40 crc kubenswrapper[4789]: I1124 11:33:40.940131 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dr4mx" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="registry-server" probeResult="failure" output=< Nov 24 11:33:40 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Nov 24 11:33:40 crc kubenswrapper[4789]: > Nov 24 11:33:41 crc kubenswrapper[4789]: I1124 11:33:41.249355 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vmm68" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="registry-server" probeResult="failure" output=< Nov 24 11:33:41 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Nov 24 11:33:41 crc kubenswrapper[4789]: > Nov 24 11:33:46 crc kubenswrapper[4789]: I1124 11:33:46.836770 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:33:46 crc kubenswrapper[4789]: I1124 11:33:46.855812 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dr4mx" podStartSLOduration=9.928187389 podStartE2EDuration="57.855790734s" podCreationTimestamp="2025-11-24 11:32:49 +0000 UTC" firstStartedPulling="2025-11-24 11:32:50.913878095 +0000 UTC m=+153.496349474" lastFinishedPulling="2025-11-24 11:33:38.84148145 +0000 UTC m=+201.423952819" observedRunningTime="2025-11-24 11:33:39.520643559 +0000 UTC m=+202.103114938" watchObservedRunningTime="2025-11-24 11:33:46.855790734 +0000 UTC m=+209.438262113" Nov 24 11:33:46 crc kubenswrapper[4789]: I1124 11:33:46.884833 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:33:47 crc kubenswrapper[4789]: I1124 11:33:47.130601 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:33:47 crc kubenswrapper[4789]: I1124 11:33:47.173432 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:33:47 crc kubenswrapper[4789]: I1124 11:33:47.320015 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:33:48 crc 
kubenswrapper[4789]: I1124 11:33:48.556668 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsbrt"] Nov 24 11:33:48 crc kubenswrapper[4789]: I1124 11:33:48.556924 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dsbrt" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerName="registry-server" containerID="cri-o://ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297" gracePeriod=2 Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.369260 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gz4q9" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.522684 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.548839 4789 generic.go:334] "Generic (PLEG): container finished" podID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerID="ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297" exitCode=0 Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.548919 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsbrt" event={"ID":"d203f144-c8d5-46fb-8139-3af59a00c0c9","Type":"ContainerDied","Data":"ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297"} Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.548951 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsbrt" event={"ID":"d203f144-c8d5-46fb-8139-3af59a00c0c9","Type":"ContainerDied","Data":"9645afc18acf4d3b0d8ac31a39d343c349e332c37914955602cc0bdf52d79a85"} Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.548971 4789 scope.go:117] "RemoveContainer" containerID="ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.549275 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsbrt" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.563502 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z9g4"] Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.563905 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4z9g4" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="registry-server" containerID="cri-o://08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865" gracePeriod=2 Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.574077 4789 scope.go:117] "RemoveContainer" containerID="bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.592114 4789 scope.go:117] "RemoveContainer" containerID="86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.606757 4789 scope.go:117] "RemoveContainer" containerID="ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297" Nov 24 11:33:49 crc kubenswrapper[4789]: E1124 11:33:49.608358 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297\": container with ID starting with ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297 not found: ID does not exist" containerID="ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.608414 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297"} err="failed to get container status \"ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297\": rpc error: code = NotFound desc = could not find container \"ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297\": container with ID starting with ee30ff09dfedd6b8b8b82cbf9c23de74171068f746a6766316803e1dcc2a2297 not found: ID does not exist" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.608490 4789 scope.go:117] "RemoveContainer" containerID="bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b" Nov 24 11:33:49 crc kubenswrapper[4789]: E1124 11:33:49.609521 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b\": container with ID starting with bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b not found: ID does not exist" containerID="bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.609590 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b"} err="failed to get container status \"bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b\": rpc error: code = NotFound desc = could not find container \"bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b\": container with ID starting with bb084ea9def935c003785c64e4cbf90eafc756a406f8a809ad6895eae373153b not found: ID does not exist" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.609636 4789 scope.go:117] "RemoveContainer" 
containerID="86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b" Nov 24 11:33:49 crc kubenswrapper[4789]: E1124 11:33:49.610025 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b\": container with ID starting with 86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b not found: ID does not exist" containerID="86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.610050 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b"} err="failed to get container status \"86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b\": rpc error: code = NotFound desc = could not find container \"86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b\": container with ID starting with 86eeffc8f12433bcb0adc62fcc3c45cb3aa017d633faf567391e0ba55e6c349b not found: ID does not exist" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.620806 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-catalog-content\") pod \"d203f144-c8d5-46fb-8139-3af59a00c0c9\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.620890 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-utilities\") pod \"d203f144-c8d5-46fb-8139-3af59a00c0c9\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.620929 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmpn5\" (UniqueName: \"kubernetes.io/projected/d203f144-c8d5-46fb-8139-3af59a00c0c9-kube-api-access-wmpn5\") pod \"d203f144-c8d5-46fb-8139-3af59a00c0c9\" (UID: \"d203f144-c8d5-46fb-8139-3af59a00c0c9\") " Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.621719 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-utilities" (OuterVolumeSpecName: "utilities") pod "d203f144-c8d5-46fb-8139-3af59a00c0c9" (UID: "d203f144-c8d5-46fb-8139-3af59a00c0c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.627999 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d203f144-c8d5-46fb-8139-3af59a00c0c9-kube-api-access-wmpn5" (OuterVolumeSpecName: "kube-api-access-wmpn5") pod "d203f144-c8d5-46fb-8139-3af59a00c0c9" (UID: "d203f144-c8d5-46fb-8139-3af59a00c0c9"). InnerVolumeSpecName "kube-api-access-wmpn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.660674 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d203f144-c8d5-46fb-8139-3af59a00c0c9" (UID: "d203f144-c8d5-46fb-8139-3af59a00c0c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.722114 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.722339 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmpn5\" (UniqueName: \"kubernetes.io/projected/d203f144-c8d5-46fb-8139-3af59a00c0c9-kube-api-access-wmpn5\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.722436 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d203f144-c8d5-46fb-8139-3af59a00c0c9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.890023 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.911254 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsbrt"] Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.915568 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dsbrt"] Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.923862 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-catalog-content\") pod \"f176cbf2-3781-402f-a415-7f4d25eea239\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.923942 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvh4t\" (UniqueName: \"kubernetes.io/projected/f176cbf2-3781-402f-a415-7f4d25eea239-kube-api-access-qvh4t\") pod \"f176cbf2-3781-402f-a415-7f4d25eea239\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.924012 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-utilities\") pod \"f176cbf2-3781-402f-a415-7f4d25eea239\" (UID: \"f176cbf2-3781-402f-a415-7f4d25eea239\") " Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.925015 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-utilities" (OuterVolumeSpecName: "utilities") pod "f176cbf2-3781-402f-a415-7f4d25eea239" (UID: "f176cbf2-3781-402f-a415-7f4d25eea239"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.991020 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f176cbf2-3781-402f-a415-7f4d25eea239" (UID: "f176cbf2-3781-402f-a415-7f4d25eea239"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:49 crc kubenswrapper[4789]: I1124 11:33:49.991814 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f176cbf2-3781-402f-a415-7f4d25eea239-kube-api-access-qvh4t" (OuterVolumeSpecName: "kube-api-access-qvh4t") pod "f176cbf2-3781-402f-a415-7f4d25eea239" (UID: "f176cbf2-3781-402f-a415-7f4d25eea239"). InnerVolumeSpecName "kube-api-access-qvh4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.011434 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dr4mx" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.029649 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvh4t\" (UniqueName: \"kubernetes.io/projected/f176cbf2-3781-402f-a415-7f4d25eea239-kube-api-access-qvh4t\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.029684 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.029695 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176cbf2-3781-402f-a415-7f4d25eea239-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.045368 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dr4mx" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.161944 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.162025 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.162078 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.162736 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.162801 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250" gracePeriod=600 Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.178669 4789 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" path="/var/lib/kubelet/pods/d203f144-c8d5-46fb-8139-3af59a00c0c9/volumes" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.243399 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.282216 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmm68" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.558446 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250" exitCode=0 Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.558499 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250"} Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.559527 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"d01d9f803d962ac5043375280873250a6cee3099fd94b66cca2fe0e05b74f3c0"} Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.562272 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z9g4" event={"ID":"f176cbf2-3781-402f-a415-7f4d25eea239","Type":"ContainerDied","Data":"08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865"} Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.562330 4789 scope.go:117] "RemoveContainer" containerID="08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.562283 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z9g4" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.562599 4789 generic.go:334] "Generic (PLEG): container finished" podID="f176cbf2-3781-402f-a415-7f4d25eea239" containerID="08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865" exitCode=0 Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.562672 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z9g4" event={"ID":"f176cbf2-3781-402f-a415-7f4d25eea239","Type":"ContainerDied","Data":"ce128fc2acabf6f13b9cf10aa333e1d37931f5938521fdad76864b3b84d145c6"} Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.596028 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z9g4"] Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.600308 4789 scope.go:117] "RemoveContainer" containerID="7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.603567 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4z9g4"] Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.614020 4789 scope.go:117] "RemoveContainer" containerID="85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.625573 4789 scope.go:117] "RemoveContainer" containerID="08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865" Nov 24 11:33:50 crc kubenswrapper[4789]: E1124 11:33:50.625914 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865\": container with ID starting with 08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865 not found: ID does not exist" containerID="08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.625949 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865"} err="failed to get container status \"08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865\": rpc error: code = NotFound desc = could not find container \"08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865\": container with ID starting with 08854396c123b49278c0f30ec750a729c7797c9933b32220e8dda0a509f1c865 not found: ID does not exist" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.625968 4789 scope.go:117] "RemoveContainer" containerID="7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764" Nov 24 11:33:50 crc kubenswrapper[4789]: E1124 11:33:50.626203 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764\": container with ID starting with 7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764 not found: ID does not exist" containerID="7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.626257 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764"} err="failed to get container status 
\"7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764\": rpc error: code = NotFound desc = could not find container \"7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764\": container with ID starting with 7473370993d7ea1bdb30ab36323967a8aeb4a3ee0ecafb2ef6a0e6eb7011b764 not found: ID does not exist" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.626271 4789 scope.go:117] "RemoveContainer" containerID="85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f" Nov 24 11:33:50 crc kubenswrapper[4789]: E1124 11:33:50.626615 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f\": container with ID starting with 85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f not found: ID does not exist" containerID="85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f" Nov 24 11:33:50 crc kubenswrapper[4789]: I1124 11:33:50.626648 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f"} err="failed to get container status \"85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f\": rpc error: code = NotFound desc = could not find container \"85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f\": container with ID starting with 85ca0c03060eb554f130dfce325000d558f28ec42597d3964cb0a72c5930876f not found: ID does not exist" Nov 24 11:33:51 crc kubenswrapper[4789]: I1124 11:33:51.961293 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz4q9"] Nov 24 11:33:51 crc kubenswrapper[4789]: I1124 11:33:51.961662 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gz4q9" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerName="registry-server" containerID="cri-o://e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa" gracePeriod=2 Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.175510 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" path="/var/lib/kubelet/pods/f176cbf2-3781-402f-a415-7f4d25eea239/volumes" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.299026 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz4q9" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.358835 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-catalog-content\") pod \"46fd6317-7fed-4725-9afd-18ea159e25d2\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.358922 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-utilities\") pod \"46fd6317-7fed-4725-9afd-18ea159e25d2\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.358953 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvlp7\" (UniqueName: \"kubernetes.io/projected/46fd6317-7fed-4725-9afd-18ea159e25d2-kube-api-access-pvlp7\") pod \"46fd6317-7fed-4725-9afd-18ea159e25d2\" (UID: \"46fd6317-7fed-4725-9afd-18ea159e25d2\") " Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.360275 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-utilities" (OuterVolumeSpecName: "utilities") pod "46fd6317-7fed-4725-9afd-18ea159e25d2" (UID: "46fd6317-7fed-4725-9afd-18ea159e25d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.367639 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fd6317-7fed-4725-9afd-18ea159e25d2-kube-api-access-pvlp7" (OuterVolumeSpecName: "kube-api-access-pvlp7") pod "46fd6317-7fed-4725-9afd-18ea159e25d2" (UID: "46fd6317-7fed-4725-9afd-18ea159e25d2"). InnerVolumeSpecName "kube-api-access-pvlp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.376189 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46fd6317-7fed-4725-9afd-18ea159e25d2" (UID: "46fd6317-7fed-4725-9afd-18ea159e25d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.460012 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.460045 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46fd6317-7fed-4725-9afd-18ea159e25d2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.460057 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvlp7\" (UniqueName: \"kubernetes.io/projected/46fd6317-7fed-4725-9afd-18ea159e25d2-kube-api-access-pvlp7\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.575243 4789 generic.go:334] "Generic (PLEG): container finished" podID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerID="e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa" exitCode=0 Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.575285 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz4q9" event={"ID":"46fd6317-7fed-4725-9afd-18ea159e25d2","Type":"ContainerDied","Data":"e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa"} Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.575296 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz4q9" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.575312 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz4q9" event={"ID":"46fd6317-7fed-4725-9afd-18ea159e25d2","Type":"ContainerDied","Data":"18f170350c2c5ec35f788479af3c101bca55f7fe4d8d03e4186d956b9d18594f"} Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.575329 4789 scope.go:117] "RemoveContainer" containerID="e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.590115 4789 scope.go:117] "RemoveContainer" containerID="8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.611610 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz4q9"] Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.615139 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz4q9"] Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.632475 4789 scope.go:117] "RemoveContainer" containerID="7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.644119 4789 scope.go:117] "RemoveContainer" containerID="e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa" Nov 24 11:33:52 crc kubenswrapper[4789]: E1124 11:33:52.644735 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa\": container with ID starting with e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa not found: ID does not exist" containerID="e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.644767 4789 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa"} err="failed to get container status \"e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa\": rpc error: code = NotFound desc = could not find container \"e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa\": container with ID starting with e056848aed82e444df55ea2c5607ab7c1fbf5a63347bc5f9d3a45a86745cc1fa not found: ID does not exist" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.644792 4789 scope.go:117] "RemoveContainer" containerID="8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5" Nov 24 11:33:52 crc kubenswrapper[4789]: E1124 11:33:52.645040 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5\": container with ID starting with 8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5 not found: ID does not exist" containerID="8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.645066 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5"} err="failed to get container status \"8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5\": rpc error: code = NotFound desc = could not find container \"8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5\": container with ID starting with 8d90387f8c872d9b267815a8eb933313b3ef3b1d078fcc5034305e76be72cbb5 not found: ID does not exist" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.645082 4789 scope.go:117] "RemoveContainer" containerID="7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e" Nov 24 11:33:52 crc kubenswrapper[4789]: E1124 11:33:52.645323 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e\": container with ID starting with 7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e not found: ID does not exist" containerID="7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e" Nov 24 11:33:52 crc kubenswrapper[4789]: I1124 11:33:52.645349 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e"} err="failed to get container status \"7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e\": rpc error: code = NotFound desc = could not find container \"7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e\": container with ID starting with 7925e002264bf58d083862f8ddf6ef71ce83fecd35687daf03f143f02f13b99e not found: ID does not exist" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.023641 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" podUID="026c0fd3-78be-48ef-81cd-ba63abb9197d" containerName="oauth-openshift" containerID="cri-o://2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0" gracePeriod=15 Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.399993 4789 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.399993 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb"
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471307 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-login\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471362 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-session\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471395 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-serving-cert\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471434 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-trusted-ca-bundle\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471477 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb58z\" (UniqueName: \"kubernetes.io/projected/026c0fd3-78be-48ef-81cd-ba63abb9197d-kube-api-access-xb58z\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471497 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-router-certs\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471522 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-cliconfig\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471539 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-service-ca\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471562 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-dir\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471586 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-policies\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471603 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-idp-0-file-data\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471617 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-provider-selection\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471638 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-error\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.471666 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-ocp-branding-template\") pod \"026c0fd3-78be-48ef-81cd-ba63abb9197d\" (UID: \"026c0fd3-78be-48ef-81cd-ba63abb9197d\") "
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.472716 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.473212 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.473641 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.473914 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.476362 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.476500 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.476533 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026c0fd3-78be-48ef-81cd-ba63abb9197d-kube-api-access-xb58z" (OuterVolumeSpecName: "kube-api-access-xb58z") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "kube-api-access-xb58z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.476760 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.477310 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.477732 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.478672 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.478788 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.479427 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "026c0fd3-78be-48ef-81cd-ba63abb9197d" (UID: "026c0fd3-78be-48ef-81cd-ba63abb9197d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572849 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb58z\" (UniqueName: \"kubernetes.io/projected/026c0fd3-78be-48ef-81cd-ba63abb9197d-kube-api-access-xb58z\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572896 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572914 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572926 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572943 4789 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572957 4789 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572969 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572982 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.572995 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.573008 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.573020 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.573032 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-session\") on 
node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.573043 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.573056 4789 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026c0fd3-78be-48ef-81cd-ba63abb9197d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.581844 4789 generic.go:334] "Generic (PLEG): container finished" podID="026c0fd3-78be-48ef-81cd-ba63abb9197d" containerID="2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0" exitCode=0 Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.581891 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" event={"ID":"026c0fd3-78be-48ef-81cd-ba63abb9197d","Type":"ContainerDied","Data":"2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0"} Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.581918 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.581942 4789 scope.go:117] "RemoveContainer" containerID="2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.581929 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bp2hb" event={"ID":"026c0fd3-78be-48ef-81cd-ba63abb9197d","Type":"ContainerDied","Data":"732a2d563642a1147bac1e1c0ba4ea67847607a15984fb93acc228693def7b9e"} Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.610576 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bp2hb"] Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.613475 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bp2hb"] Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.613695 4789 scope.go:117] "RemoveContainer" containerID="2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0" Nov 24 11:33:53 crc kubenswrapper[4789]: E1124 11:33:53.614648 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0\": container with ID starting with 2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0 not found: ID does not exist" containerID="2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0" Nov 24 11:33:53 crc kubenswrapper[4789]: I1124 11:33:53.614822 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0"} err="failed to get container status \"2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0\": rpc error: code = NotFound desc = could not find container \"2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0\": container with ID starting with 2e159fe72c22ea5dd47644beda978e9ac41ce9336a22775c950f9252e8e684b0 not found: ID does not exist" Nov 24 
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.176734 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026c0fd3-78be-48ef-81cd-ba63abb9197d" path="/var/lib/kubelet/pods/026c0fd3-78be-48ef-81cd-ba63abb9197d/volumes"
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.177251 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" path="/var/lib/kubelet/pods/46fd6317-7fed-4725-9afd-18ea159e25d2/volumes"
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.355159 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmm68"]
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.355697 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmm68" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="registry-server" containerID="cri-o://be06d4d0a4cc43f6c7fd8d65345bac97e09662749d05cc0e9f3df746b68ba02f" gracePeriod=2
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.592183 4789 generic.go:334] "Generic (PLEG): container finished" podID="8764ee23-63c4-4186-966f-4e97189aa541" containerID="be06d4d0a4cc43f6c7fd8d65345bac97e09662749d05cc0e9f3df746b68ba02f" exitCode=0
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.592282 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmm68" event={"ID":"8764ee23-63c4-4186-966f-4e97189aa541","Type":"ContainerDied","Data":"be06d4d0a4cc43f6c7fd8d65345bac97e09662749d05cc0e9f3df746b68ba02f"}
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.726809 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmm68"
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.787165 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-utilities\") pod \"8764ee23-63c4-4186-966f-4e97189aa541\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") "
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.787301 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdf2w\" (UniqueName: \"kubernetes.io/projected/8764ee23-63c4-4186-966f-4e97189aa541-kube-api-access-kdf2w\") pod \"8764ee23-63c4-4186-966f-4e97189aa541\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") "
Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.787377 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-catalog-content\") pod \"8764ee23-63c4-4186-966f-4e97189aa541\" (UID: \"8764ee23-63c4-4186-966f-4e97189aa541\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.793592 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8764ee23-63c4-4186-966f-4e97189aa541-kube-api-access-kdf2w" (OuterVolumeSpecName: "kube-api-access-kdf2w") pod "8764ee23-63c4-4186-966f-4e97189aa541" (UID: "8764ee23-63c4-4186-966f-4e97189aa541"). InnerVolumeSpecName "kube-api-access-kdf2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.874749 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8764ee23-63c4-4186-966f-4e97189aa541" (UID: "8764ee23-63c4-4186-966f-4e97189aa541"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.889502 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdf2w\" (UniqueName: \"kubernetes.io/projected/8764ee23-63c4-4186-966f-4e97189aa541-kube-api-access-kdf2w\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.889827 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:54 crc kubenswrapper[4789]: I1124 11:33:54.889982 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8764ee23-63c4-4186-966f-4e97189aa541-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:55 crc kubenswrapper[4789]: I1124 11:33:55.644358 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmm68" event={"ID":"8764ee23-63c4-4186-966f-4e97189aa541","Type":"ContainerDied","Data":"3f1cd7eef4ce2ce1bef4d3ff482742a4c0210fc07e25926db14a0e0cafab7ad4"} Nov 24 11:33:55 crc kubenswrapper[4789]: I1124 11:33:55.644515 4789 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 11:33:55 crc kubenswrapper[4789]: I1124 11:33:55.644515 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmm68"
Nov 24 11:33:55 crc kubenswrapper[4789]: I1124 11:33:55.644743 4789 scope.go:117] "RemoveContainer" containerID="be06d4d0a4cc43f6c7fd8d65345bac97e09662749d05cc0e9f3df746b68ba02f"
Nov 24 11:33:55 crc kubenswrapper[4789]: I1124 11:33:55.668679 4789 scope.go:117] "RemoveContainer" containerID="bd833ab1bac283cdac268637fcfb62e21ff2c1cf6c39dce48564b50960f8237a"
Nov 24 11:33:55 crc kubenswrapper[4789]: I1124 11:33:55.671029 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmm68"]
Nov 24 11:33:55 crc kubenswrapper[4789]: I1124 11:33:55.673541 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmm68"]
Nov 24 11:33:55 crc kubenswrapper[4789]: I1124 11:33:55.690015 4789 scope.go:117] "RemoveContainer" containerID="c5887ff66199617dbb502f10bc24d8ecec57c455583e30dfe0c5c5a35dac11ce"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.182266 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8764ee23-63c4-4186-966f-4e97189aa541" path="/var/lib/kubelet/pods/8764ee23-63c4-4186-966f-4e97189aa541/volumes"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.260810 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75494747d9-rk4z2"]
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261047 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerName="extract-utilities"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261061 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerName="extract-utilities"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261074 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerName="extract-content"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261083 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerName="extract-content"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261094 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261103 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261114 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026c0fd3-78be-48ef-81cd-ba63abb9197d" containerName="oauth-openshift"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261122 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="026c0fd3-78be-48ef-81cd-ba63abb9197d" containerName="oauth-openshift"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261136 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261147 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261158 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4149d0c4-d229-42bf-a53b-e1800c70946a" containerName="pruner"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261166 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4149d0c4-d229-42bf-a53b-e1800c70946a" containerName="pruner"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261176 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261184 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261196 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="extract-utilities"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261204 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="extract-utilities"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261215 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="extract-content"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261223 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="extract-content"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261236 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca038654-9fdf-4a95-ba82-420060b252c8" containerName="pruner"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261244 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca038654-9fdf-4a95-ba82-420060b252c8" containerName="pruner"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261258 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="extract-utilities"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261265 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="extract-utilities"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261278 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="extract-content"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261286 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="extract-content"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261299 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerName="extract-utilities"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261308 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerName="extract-utilities"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261318 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerName="extract-content"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261326 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerName="extract-content"
Nov 24 11:33:56 crc kubenswrapper[4789]: E1124 11:33:56.261342 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261350 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261460 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f176cbf2-3781-402f-a415-7f4d25eea239" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261564 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d203f144-c8d5-46fb-8139-3af59a00c0c9" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261577 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca038654-9fdf-4a95-ba82-420060b252c8" containerName="pruner"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261589 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="026c0fd3-78be-48ef-81cd-ba63abb9197d" containerName="oauth-openshift"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261604 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd6317-7fed-4725-9afd-18ea159e25d2" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261614 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4149d0c4-d229-42bf-a53b-e1800c70946a" containerName="pruner"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.261626 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="8764ee23-63c4-4186-966f-4e97189aa541" containerName="registry-server"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.262076 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.267024 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.267299 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.270049 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.270119 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.270161 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.270209 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.270592 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.270840 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.270927 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.270880 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.271821 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.273984 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.282566 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.286878 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75494747d9-rk4z2"]
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.293310 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.295137 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.346443 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b536b83-e0ed-4705-b376-3280c2657213-audit-dir\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.346518 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-session\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.346550 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-audit-policies\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2"
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.346595 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2"
pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.447941 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-router-certs\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.447989 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b536b83-e0ed-4705-b376-3280c2657213-audit-dir\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448019 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-session\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448054 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448079 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jvwf\" (UniqueName: \"kubernetes.io/projected/7b536b83-e0ed-4705-b376-3280c2657213-kube-api-access-2jvwf\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448103 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-template-error\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448109 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b536b83-e0ed-4705-b376-3280c2657213-audit-dir\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448141 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " 
pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448175 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-audit-policies\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448210 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448244 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448265 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-service-ca\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448288 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448314 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-template-login\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.448919 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-audit-policies\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.449012 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: 
\"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.453728 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-session\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.548998 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549045 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jvwf\" (UniqueName: \"kubernetes.io/projected/7b536b83-e0ed-4705-b376-3280c2657213-kube-api-access-2jvwf\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549070 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-template-error\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549093 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549122 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549160 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-service-ca\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549183 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: 
\"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549206 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-template-login\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549251 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549276 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-router-certs\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.549936 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-service-ca\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.551150 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.552302 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-router-certs\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.555878 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.556234 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.557287 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.560496 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-template-login\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.560634 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-template-error\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.560745 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b536b83-e0ed-4705-b376-3280c2657213-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.579034 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jvwf\" (UniqueName: \"kubernetes.io/projected/7b536b83-e0ed-4705-b376-3280c2657213-kube-api-access-2jvwf\") pod \"oauth-openshift-75494747d9-rk4z2\" (UID: \"7b536b83-e0ed-4705-b376-3280c2657213\") " pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.583479 4789 util.go:30] "No sandbox for pod can be found. 
Nov 24 11:33:56 crc kubenswrapper[4789]: I1124 11:33:56.583479 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2"
Nov 24 11:33:57 crc kubenswrapper[4789]: I1124 11:33:57.000955 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75494747d9-rk4z2"]
Nov 24 11:33:57 crc kubenswrapper[4789]: I1124 11:33:57.656404 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" event={"ID":"7b536b83-e0ed-4705-b376-3280c2657213","Type":"ContainerStarted","Data":"3d2c4a0c411113ac641071e0d76450e7e0adf3090b3528beb158fd0419f86018"}
Nov 24 11:33:57 crc kubenswrapper[4789]: I1124 11:33:57.656754 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2"
Nov 24 11:33:57 crc kubenswrapper[4789]: I1124 11:33:57.656767 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" event={"ID":"7b536b83-e0ed-4705-b376-3280c2657213","Type":"ContainerStarted","Data":"6dfd94770e93e8ccb9c494a3f376b9724d9e72ce9fb9a56a46b24e70a4320850"}
Nov 24 11:33:57 crc kubenswrapper[4789]: I1124 11:33:57.681361 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2" podStartSLOduration=29.681337378 podStartE2EDuration="29.681337378s" podCreationTimestamp="2025-11-24 11:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:33:57.677089087 +0000 UTC m=+220.259560466" watchObservedRunningTime="2025-11-24 11:33:57.681337378 +0000 UTC m=+220.263808777"
Nov 24 11:33:57 crc kubenswrapper[4789]: I1124 11:33:57.738389 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75494747d9-rk4z2"
Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.393975 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6tlsz"]
Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.399649 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plzxk"]
Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.399958 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-plzxk" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerName="registry-server" containerID="cri-o://9a98be62f1c83cade79bacad34bca857474fff5602398c05a7b93cb2009b1eb5" gracePeriod=30
Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.400409 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6tlsz" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="registry-server" containerID="cri-o://b7657f6e14c4dec121b091739f66fe48632da23c161888660d6de3684122daaa" gracePeriod=30
Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.418956 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xf9qh"]
containerID="cri-o://79249acf6e5e50a690e93bb69241f6f7c3d7b4100da7a95869d349a43603f727" gracePeriod=30 Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.438968 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qw5"] Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.439228 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7qw5" podUID="f6e57c00-016a-45da-8988-927342153596" containerName="registry-server" containerID="cri-o://181779097f1f88b67bc74d3061c14a5380963f15ed1f8c5216829a19e9679c78" gracePeriod=30 Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.442577 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dr4mx"] Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.442870 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dr4mx" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="registry-server" containerID="cri-o://1cd454db2d29ec81815d17d93b6931de653070e6d21ad8be613a283a609fef0e" gracePeriod=30 Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.459077 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfvts"] Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.459869 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.506034 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfvts"] Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.657826 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce9631bf-85d8-411c-8dc8-612ed608cd07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.658213 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwczr\" (UniqueName: \"kubernetes.io/projected/ce9631bf-85d8-411c-8dc8-612ed608cd07-kube-api-access-gwczr\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.658288 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce9631bf-85d8-411c-8dc8-612ed608cd07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.748337 4789 generic.go:334] "Generic (PLEG): container finished" podID="48ee479a-ea6a-4831-858a-1cdfaca6762c" containerID="79249acf6e5e50a690e93bb69241f6f7c3d7b4100da7a95869d349a43603f727" exitCode=0 Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.748417 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" 
event={"ID":"48ee479a-ea6a-4831-858a-1cdfaca6762c","Type":"ContainerDied","Data":"79249acf6e5e50a690e93bb69241f6f7c3d7b4100da7a95869d349a43603f727"} Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.752030 4789 generic.go:334] "Generic (PLEG): container finished" podID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerID="9a98be62f1c83cade79bacad34bca857474fff5602398c05a7b93cb2009b1eb5" exitCode=0 Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.752102 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plzxk" event={"ID":"33ef3ee1-1338-4ca5-b290-ea83723c547e","Type":"ContainerDied","Data":"9a98be62f1c83cade79bacad34bca857474fff5602398c05a7b93cb2009b1eb5"} Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.756177 4789 generic.go:334] "Generic (PLEG): container finished" podID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerID="b7657f6e14c4dec121b091739f66fe48632da23c161888660d6de3684122daaa" exitCode=0 Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.756217 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tlsz" event={"ID":"de46ba5d-4892-4797-bec0-edb2aadce87f","Type":"ContainerDied","Data":"b7657f6e14c4dec121b091739f66fe48632da23c161888660d6de3684122daaa"} Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.759035 4789 generic.go:334] "Generic (PLEG): container finished" podID="f7958781-e60c-4503-9aaf-a28078212e87" containerID="1cd454db2d29ec81815d17d93b6931de653070e6d21ad8be613a283a609fef0e" exitCode=0 Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.759083 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr4mx" event={"ID":"f7958781-e60c-4503-9aaf-a28078212e87","Type":"ContainerDied","Data":"1cd454db2d29ec81815d17d93b6931de653070e6d21ad8be613a283a609fef0e"} Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.761719 4789 generic.go:334] "Generic (PLEG): container finished" podID="f6e57c00-016a-45da-8988-927342153596" containerID="181779097f1f88b67bc74d3061c14a5380963f15ed1f8c5216829a19e9679c78" exitCode=0 Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.761741 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qw5" event={"ID":"f6e57c00-016a-45da-8988-927342153596","Type":"ContainerDied","Data":"181779097f1f88b67bc74d3061c14a5380963f15ed1f8c5216829a19e9679c78"} Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.768896 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce9631bf-85d8-411c-8dc8-612ed608cd07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.769071 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce9631bf-85d8-411c-8dc8-612ed608cd07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.769128 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwczr\" (UniqueName: 
\"kubernetes.io/projected/ce9631bf-85d8-411c-8dc8-612ed608cd07-kube-api-access-gwczr\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.771368 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce9631bf-85d8-411c-8dc8-612ed608cd07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.783341 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce9631bf-85d8-411c-8dc8-612ed608cd07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.785604 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwczr\" (UniqueName: \"kubernetes.io/projected/ce9631bf-85d8-411c-8dc8-612ed608cd07-kube-api-access-gwczr\") pod \"marketplace-operator-79b997595-pfvts\" (UID: \"ce9631bf-85d8-411c-8dc8-612ed608cd07\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.855999 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.937553 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7qw5" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.946916 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.965528 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.972548 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-utilities\") pod \"33ef3ee1-1338-4ca5-b290-ea83723c547e\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.972680 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-catalog-content\") pod \"33ef3ee1-1338-4ca5-b290-ea83723c547e\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.972776 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tcsq\" (UniqueName: \"kubernetes.io/projected/33ef3ee1-1338-4ca5-b290-ea83723c547e-kube-api-access-8tcsq\") pod \"33ef3ee1-1338-4ca5-b290-ea83723c547e\" (UID: \"33ef3ee1-1338-4ca5-b290-ea83723c547e\") " Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.973715 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-utilities" (OuterVolumeSpecName: "utilities") pod "33ef3ee1-1338-4ca5-b290-ea83723c547e" (UID: "33ef3ee1-1338-4ca5-b290-ea83723c547e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.979703 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ef3ee1-1338-4ca5-b290-ea83723c547e-kube-api-access-8tcsq" (OuterVolumeSpecName: "kube-api-access-8tcsq") pod "33ef3ee1-1338-4ca5-b290-ea83723c547e" (UID: "33ef3ee1-1338-4ca5-b290-ea83723c547e"). InnerVolumeSpecName "kube-api-access-8tcsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:34:12 crc kubenswrapper[4789]: I1124 11:34:12.980816 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dr4mx" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.023035 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.049567 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33ef3ee1-1338-4ca5-b290-ea83723c547e" (UID: "33ef3ee1-1338-4ca5-b290-ea83723c547e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.074778 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-utilities\") pod \"de46ba5d-4892-4797-bec0-edb2aadce87f\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.074840 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-trusted-ca\") pod \"48ee479a-ea6a-4831-858a-1cdfaca6762c\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.074864 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-catalog-content\") pod \"f6e57c00-016a-45da-8988-927342153596\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.074884 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xbvf\" (UniqueName: \"kubernetes.io/projected/48ee479a-ea6a-4831-858a-1cdfaca6762c-kube-api-access-4xbvf\") pod \"48ee479a-ea6a-4831-858a-1cdfaca6762c\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.074899 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-catalog-content\") pod \"de46ba5d-4892-4797-bec0-edb2aadce87f\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.074921 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vcjw\" (UniqueName: \"kubernetes.io/projected/de46ba5d-4892-4797-bec0-edb2aadce87f-kube-api-access-9vcjw\") pod \"de46ba5d-4892-4797-bec0-edb2aadce87f\" (UID: \"de46ba5d-4892-4797-bec0-edb2aadce87f\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.074981 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-utilities\") pod \"f6e57c00-016a-45da-8988-927342153596\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.075000 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-utilities\") pod \"f7958781-e60c-4503-9aaf-a28078212e87\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.075029 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzq8j\" (UniqueName: \"kubernetes.io/projected/f6e57c00-016a-45da-8988-927342153596-kube-api-access-vzq8j\") pod \"f6e57c00-016a-45da-8988-927342153596\" (UID: \"f6e57c00-016a-45da-8988-927342153596\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.075043 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-catalog-content\") pod 
\"f7958781-e60c-4503-9aaf-a28078212e87\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.075065 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p72x8\" (UniqueName: \"kubernetes.io/projected/f7958781-e60c-4503-9aaf-a28078212e87-kube-api-access-p72x8\") pod \"f7958781-e60c-4503-9aaf-a28078212e87\" (UID: \"f7958781-e60c-4503-9aaf-a28078212e87\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.075092 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-operator-metrics\") pod \"48ee479a-ea6a-4831-858a-1cdfaca6762c\" (UID: \"48ee479a-ea6a-4831-858a-1cdfaca6762c\") " Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.075283 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.075294 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33ef3ee1-1338-4ca5-b290-ea83723c547e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.075304 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tcsq\" (UniqueName: \"kubernetes.io/projected/33ef3ee1-1338-4ca5-b290-ea83723c547e-kube-api-access-8tcsq\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.076250 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "48ee479a-ea6a-4831-858a-1cdfaca6762c" (UID: "48ee479a-ea6a-4831-858a-1cdfaca6762c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.079314 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ee479a-ea6a-4831-858a-1cdfaca6762c-kube-api-access-4xbvf" (OuterVolumeSpecName: "kube-api-access-4xbvf") pod "48ee479a-ea6a-4831-858a-1cdfaca6762c" (UID: "48ee479a-ea6a-4831-858a-1cdfaca6762c"). InnerVolumeSpecName "kube-api-access-4xbvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.081630 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de46ba5d-4892-4797-bec0-edb2aadce87f-kube-api-access-9vcjw" (OuterVolumeSpecName: "kube-api-access-9vcjw") pod "de46ba5d-4892-4797-bec0-edb2aadce87f" (UID: "de46ba5d-4892-4797-bec0-edb2aadce87f"). InnerVolumeSpecName "kube-api-access-9vcjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.082303 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-utilities" (OuterVolumeSpecName: "utilities") pod "f7958781-e60c-4503-9aaf-a28078212e87" (UID: "f7958781-e60c-4503-9aaf-a28078212e87"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.087906 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7958781-e60c-4503-9aaf-a28078212e87-kube-api-access-p72x8" (OuterVolumeSpecName: "kube-api-access-p72x8") pod "f7958781-e60c-4503-9aaf-a28078212e87" (UID: "f7958781-e60c-4503-9aaf-a28078212e87"). InnerVolumeSpecName "kube-api-access-p72x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.095168 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-utilities" (OuterVolumeSpecName: "utilities") pod "f6e57c00-016a-45da-8988-927342153596" (UID: "f6e57c00-016a-45da-8988-927342153596"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.095535 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "48ee479a-ea6a-4831-858a-1cdfaca6762c" (UID: "48ee479a-ea6a-4831-858a-1cdfaca6762c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.099270 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e57c00-016a-45da-8988-927342153596-kube-api-access-vzq8j" (OuterVolumeSpecName: "kube-api-access-vzq8j") pod "f6e57c00-016a-45da-8988-927342153596" (UID: "f6e57c00-016a-45da-8988-927342153596"). InnerVolumeSpecName "kube-api-access-vzq8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.100746 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-utilities" (OuterVolumeSpecName: "utilities") pod "de46ba5d-4892-4797-bec0-edb2aadce87f" (UID: "de46ba5d-4892-4797-bec0-edb2aadce87f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.107584 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6e57c00-016a-45da-8988-927342153596" (UID: "f6e57c00-016a-45da-8988-927342153596"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.131539 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de46ba5d-4892-4797-bec0-edb2aadce87f" (UID: "de46ba5d-4892-4797-bec0-edb2aadce87f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179120 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179614 4789 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179662 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179675 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xbvf\" (UniqueName: \"kubernetes.io/projected/48ee479a-ea6a-4831-858a-1cdfaca6762c-kube-api-access-4xbvf\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179686 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de46ba5d-4892-4797-bec0-edb2aadce87f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179697 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vcjw\" (UniqueName: \"kubernetes.io/projected/de46ba5d-4892-4797-bec0-edb2aadce87f-kube-api-access-9vcjw\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179708 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e57c00-016a-45da-8988-927342153596-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179743 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179756 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzq8j\" (UniqueName: \"kubernetes.io/projected/f6e57c00-016a-45da-8988-927342153596-kube-api-access-vzq8j\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179783 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p72x8\" (UniqueName: \"kubernetes.io/projected/f7958781-e60c-4503-9aaf-a28078212e87-kube-api-access-p72x8\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.179795 4789 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ee479a-ea6a-4831-858a-1cdfaca6762c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.211331 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7958781-e60c-4503-9aaf-a28078212e87" (UID: "f7958781-e60c-4503-9aaf-a28078212e87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.220585 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfvts"] Nov 24 11:34:13 crc kubenswrapper[4789]: W1124 11:34:13.226870 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce9631bf_85d8_411c_8dc8_612ed608cd07.slice/crio-d77946c2dbd36cc46a46cbfbb9b76c4982ea7b17ff93611dc2e9835459f4666c WatchSource:0}: Error finding container d77946c2dbd36cc46a46cbfbb9b76c4982ea7b17ff93611dc2e9835459f4666c: Status 404 returned error can't find the container with id d77946c2dbd36cc46a46cbfbb9b76c4982ea7b17ff93611dc2e9835459f4666c Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.282333 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7958781-e60c-4503-9aaf-a28078212e87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.767897 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plzxk" event={"ID":"33ef3ee1-1338-4ca5-b290-ea83723c547e","Type":"ContainerDied","Data":"7288a03aed5d2f5c2f7bcf16314dee7f257a9b1eff6cc32e1fedecb5de4ebf80"} Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.767956 4789 scope.go:117] "RemoveContainer" containerID="9a98be62f1c83cade79bacad34bca857474fff5602398c05a7b93cb2009b1eb5" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.768094 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plzxk" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.774941 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tlsz" event={"ID":"de46ba5d-4892-4797-bec0-edb2aadce87f","Type":"ContainerDied","Data":"5cd99ba550c1374847bc71b6d928f41af8d88fa51e4967c3dd357a29e5056ba9"} Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.775001 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6tlsz" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.776853 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dr4mx" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.776856 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dr4mx" event={"ID":"f7958781-e60c-4503-9aaf-a28078212e87","Type":"ContainerDied","Data":"e47b33da9cd2f776fbeca59879c418740749f80bd18a3e6a293d443b7ce8fada"} Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.783403 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" event={"ID":"48ee479a-ea6a-4831-858a-1cdfaca6762c","Type":"ContainerDied","Data":"c39475e2940c01ab00639ad20049fd51c057485ed8a9347a00f7125681397428"} Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.783532 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xf9qh" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.789732 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7qw5" event={"ID":"f6e57c00-016a-45da-8988-927342153596","Type":"ContainerDied","Data":"8415a60f117495bcfc19b2a40c9ec0ae74b129d573df0c2dcc03299d7b664e03"} Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.790439 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7qw5" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.794850 4789 scope.go:117] "RemoveContainer" containerID="8bbaeeb1a2f65e0a1bb5adefcb2d84d576a57d151ee51fa8a48026e5c0d67e31" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.800097 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" event={"ID":"ce9631bf-85d8-411c-8dc8-612ed608cd07","Type":"ContainerStarted","Data":"2c36bf006450a54d03235b31cc3d923312839e0498f2aa61bdcee0ec34bc7718"} Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.800230 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" event={"ID":"ce9631bf-85d8-411c-8dc8-612ed608cd07","Type":"ContainerStarted","Data":"d77946c2dbd36cc46a46cbfbb9b76c4982ea7b17ff93611dc2e9835459f4666c"} Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.800397 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.810329 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.817570 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6tlsz"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.820792 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6tlsz"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.824868 4789 scope.go:117] "RemoveContainer" containerID="bce30429f0622abc36c590a75290ff414c6740a6132911eef84810f640e59ad3" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.840571 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dr4mx"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.843913 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dr4mx"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.853764 4789 scope.go:117] "RemoveContainer" containerID="b7657f6e14c4dec121b091739f66fe48632da23c161888660d6de3684122daaa" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.860450 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xf9qh"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.865864 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xf9qh"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.876616 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plzxk"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.876707 4789 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-plzxk"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.876987 4789 scope.go:117] "RemoveContainer" containerID="6dca67c7b50ab2cd1703a094bc65368754021936cd8fb6b4938d0dda848922c4" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.898319 4789 scope.go:117] "RemoveContainer" containerID="778eb862fac4f3bca96c7ed9ff6594ec39f8b4ddfa1b0b30bc74d32077e81659" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.901964 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pfvts" podStartSLOduration=1.9019465709999999 podStartE2EDuration="1.901946571s" podCreationTimestamp="2025-11-24 11:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:34:13.897128666 +0000 UTC m=+236.479600045" watchObservedRunningTime="2025-11-24 11:34:13.901946571 +0000 UTC m=+236.484417950" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.928535 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qw5"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.937467 4789 scope.go:117] "RemoveContainer" containerID="1cd454db2d29ec81815d17d93b6931de653070e6d21ad8be613a283a609fef0e" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.941895 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7qw5"] Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.951763 4789 scope.go:117] "RemoveContainer" containerID="4b5692ebcc366096235ee172f5b27f7682545a0a96787b60201c15ca4dc0da2f" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.966313 4789 scope.go:117] "RemoveContainer" containerID="0e8532e947182b2765036556b574252db6fd7420bc25bcaf1de4d6a3efd247df" Nov 24 11:34:13 crc kubenswrapper[4789]: I1124 11:34:13.980162 4789 scope.go:117] "RemoveContainer" containerID="79249acf6e5e50a690e93bb69241f6f7c3d7b4100da7a95869d349a43603f727" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.004387 4789 scope.go:117] "RemoveContainer" containerID="181779097f1f88b67bc74d3061c14a5380963f15ed1f8c5216829a19e9679c78" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.020556 4789 scope.go:117] "RemoveContainer" containerID="b4b3e1d01217f002a3668f3a85ee13ef810ee3cd8d812ae738ba8ccc4c0d4d3f" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.033209 4789 scope.go:117] "RemoveContainer" containerID="d402807b27660c07bba28dcce8e08cb52d9c55d151868eb970a0306e2409aeab" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.180185 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" path="/var/lib/kubelet/pods/33ef3ee1-1338-4ca5-b290-ea83723c547e/volumes" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.180782 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ee479a-ea6a-4831-858a-1cdfaca6762c" path="/var/lib/kubelet/pods/48ee479a-ea6a-4831-858a-1cdfaca6762c/volumes" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.181180 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" path="/var/lib/kubelet/pods/de46ba5d-4892-4797-bec0-edb2aadce87f/volumes" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.181727 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e57c00-016a-45da-8988-927342153596" 
path="/var/lib/kubelet/pods/f6e57c00-016a-45da-8988-927342153596/volumes" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.182251 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7958781-e60c-4503-9aaf-a28078212e87" path="/var/lib/kubelet/pods/f7958781-e60c-4503-9aaf-a28078212e87/volumes" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411640 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c7xwt"] Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411816 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="extract-utilities" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411827 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="extract-utilities" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411838 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411846 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411858 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e57c00-016a-45da-8988-927342153596" containerName="extract-utilities" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411863 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e57c00-016a-45da-8988-927342153596" containerName="extract-utilities" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411872 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ee479a-ea6a-4831-858a-1cdfaca6762c" containerName="marketplace-operator" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411877 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ee479a-ea6a-4831-858a-1cdfaca6762c" containerName="marketplace-operator" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411886 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411892 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411900 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerName="extract-content" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411906 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerName="extract-content" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411913 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411919 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411928 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="extract-utilities" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 
11:34:14.411933 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="extract-utilities" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411940 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e57c00-016a-45da-8988-927342153596" containerName="extract-content" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411946 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e57c00-016a-45da-8988-927342153596" containerName="extract-content" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411953 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="extract-content" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411958 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="extract-content" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411966 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="extract-content" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411971 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="extract-content" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411980 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e57c00-016a-45da-8988-927342153596" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411986 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e57c00-016a-45da-8988-927342153596" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: E1124 11:34:14.411994 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerName="extract-utilities" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.411999 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerName="extract-utilities" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.412074 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e57c00-016a-45da-8988-927342153596" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.412085 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7958781-e60c-4503-9aaf-a28078212e87" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.412092 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="de46ba5d-4892-4797-bec0-edb2aadce87f" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.412103 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ef3ee1-1338-4ca5-b290-ea83723c547e" containerName="registry-server" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.412109 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ee479a-ea6a-4831-858a-1cdfaca6762c" containerName="marketplace-operator" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.412746 4789 util.go:30] "No sandbox for pod can be found. 
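The cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 bursts above fire while the kubelet admits the replacement catalog pod: its CPU- and memory-manager state still holds assignments for the five just-removed pods' containers, so each stale entry is dropped (the E-level lines are routine here, not failures). The kubelet checkpoints this state to disk; a sketch of inspecting the CPU-manager checkpoint file, decoded loosely since its exact field set varies by kubelet version and policy:

```go
// dump_cpu_state.go - peek at the kubelet's CPU-manager checkpoint
// (sketch; run on the node; schema depends on kubelet version/policy).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var state map[string]any // decoded loosely on purpose
	if err := json.Unmarshal(raw, &state); err != nil {
		panic(err)
	}
	for k, v := range state {
		fmt.Printf("%s: %v\n", k, v)
	}
}
```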
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.414642 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.419733 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd57300a-9489-4148-8c58-89477b5d9af4-catalog-content\") pod \"redhat-marketplace-c7xwt\" (UID: \"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.419788 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd57300a-9489-4148-8c58-89477b5d9af4-utilities\") pod \"redhat-marketplace-c7xwt\" (UID: \"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.419888 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9cgb\" (UniqueName: \"kubernetes.io/projected/dd57300a-9489-4148-8c58-89477b5d9af4-kube-api-access-h9cgb\") pod \"redhat-marketplace-c7xwt\" (UID: \"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.434243 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7xwt"] Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.520486 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd57300a-9489-4148-8c58-89477b5d9af4-catalog-content\") pod \"redhat-marketplace-c7xwt\" (UID: \"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.520556 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd57300a-9489-4148-8c58-89477b5d9af4-utilities\") pod \"redhat-marketplace-c7xwt\" (UID: \"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.520582 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9cgb\" (UniqueName: \"kubernetes.io/projected/dd57300a-9489-4148-8c58-89477b5d9af4-kube-api-access-h9cgb\") pod \"redhat-marketplace-c7xwt\" (UID: \"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.520952 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd57300a-9489-4148-8c58-89477b5d9af4-catalog-content\") pod \"redhat-marketplace-c7xwt\" (UID: \"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.521309 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd57300a-9489-4148-8c58-89477b5d9af4-utilities\") pod \"redhat-marketplace-c7xwt\" (UID: 
\"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.545349 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9cgb\" (UniqueName: \"kubernetes.io/projected/dd57300a-9489-4148-8c58-89477b5d9af4-kube-api-access-h9cgb\") pod \"redhat-marketplace-c7xwt\" (UID: \"dd57300a-9489-4148-8c58-89477b5d9af4\") " pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:14 crc kubenswrapper[4789]: I1124 11:34:14.744965 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.013843 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4nkmf"] Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.015281 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.018074 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.018965 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4nkmf"] Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.128133 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tctq4\" (UniqueName: \"kubernetes.io/projected/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-kube-api-access-tctq4\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.128260 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-catalog-content\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.128294 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-utilities\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.141391 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7xwt"] Nov 24 11:34:15 crc kubenswrapper[4789]: W1124 11:34:15.149543 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd57300a_9489_4148_8c58_89477b5d9af4.slice/crio-89a030763b9437b953520b2af83d4cd091749c9d57e8eb4dac0cd9012e00defb WatchSource:0}: Error finding container 89a030763b9437b953520b2af83d4cd091749c9d57e8eb4dac0cd9012e00defb: Status 404 returned error can't find the container with id 89a030763b9437b953520b2af83d4cd091749c9d57e8eb4dac0cd9012e00defb Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.229991 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-utilities\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.230052 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tctq4\" (UniqueName: \"kubernetes.io/projected/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-kube-api-access-tctq4\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.230159 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-catalog-content\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.230536 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-utilities\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.231325 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-catalog-content\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.248410 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tctq4\" (UniqueName: \"kubernetes.io/projected/6b306b4d-a5ff-4c9c-b070-967f57a7e0fc-kube-api-access-tctq4\") pod \"certified-operators-4nkmf\" (UID: \"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc\") " pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.336005 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.534826 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4nkmf"] Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.815954 4789 generic.go:334] "Generic (PLEG): container finished" podID="dd57300a-9489-4148-8c58-89477b5d9af4" containerID="6bf9c9be2527af996c2f11230bafdaec64109cd08564dee484a7681d4beb7443" exitCode=0 Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.816025 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7xwt" event={"ID":"dd57300a-9489-4148-8c58-89477b5d9af4","Type":"ContainerDied","Data":"6bf9c9be2527af996c2f11230bafdaec64109cd08564dee484a7681d4beb7443"} Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.816297 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7xwt" event={"ID":"dd57300a-9489-4148-8c58-89477b5d9af4","Type":"ContainerStarted","Data":"89a030763b9437b953520b2af83d4cd091749c9d57e8eb4dac0cd9012e00defb"} Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.818384 4789 generic.go:334] "Generic (PLEG): container finished" podID="6b306b4d-a5ff-4c9c-b070-967f57a7e0fc" containerID="d26f4d2ae59990d29af02d910bd595b00467c17e0af081872c05a9bb6aebf9ed" exitCode=0 Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.819260 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nkmf" event={"ID":"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc","Type":"ContainerDied","Data":"d26f4d2ae59990d29af02d910bd595b00467c17e0af081872c05a9bb6aebf9ed"} Nov 24 11:34:15 crc kubenswrapper[4789]: I1124 11:34:15.819309 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nkmf" event={"ID":"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc","Type":"ContainerStarted","Data":"539a0d7915f5d349b59261241b224b9bcbfa9be82c4e50a5ebde5fc2345ba71e"} Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.824237 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dns7w"] Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.828141 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.828708 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dns7w"] Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.832295 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.852086 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9n7c\" (UniqueName: \"kubernetes.io/projected/5cab2fff-81a2-48d7-b216-28abaf890739-kube-api-access-d9n7c\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.852167 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-catalog-content\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.852243 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-utilities\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.952598 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9n7c\" (UniqueName: \"kubernetes.io/projected/5cab2fff-81a2-48d7-b216-28abaf890739-kube-api-access-d9n7c\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.952958 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-catalog-content\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.952987 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-utilities\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.953729 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-utilities\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.953725 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-catalog-content\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " 
pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:16 crc kubenswrapper[4789]: I1124 11:34:16.968815 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9n7c\" (UniqueName: \"kubernetes.io/projected/5cab2fff-81a2-48d7-b216-28abaf890739-kube-api-access-d9n7c\") pod \"redhat-operators-dns7w\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.188334 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.407818 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gsg89"] Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.411211 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.413794 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.416565 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gsg89"] Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.560873 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-utilities\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.560928 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mspkk\" (UniqueName: \"kubernetes.io/projected/d070801e-b0f9-43f1-9521-c3548067d7cb-kube-api-access-mspkk\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.561047 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-catalog-content\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.604893 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dns7w"] Nov 24 11:34:17 crc kubenswrapper[4789]: W1124 11:34:17.609664 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cab2fff_81a2_48d7_b216_28abaf890739.slice/crio-332e6b9e3502a50c66b0469c19efdea8d810027d963e3c11ba0ce4b32347d638 WatchSource:0}: Error finding container 332e6b9e3502a50c66b0469c19efdea8d810027d963e3c11ba0ce4b32347d638: Status 404 returned error can't find the container with id 332e6b9e3502a50c66b0469c19efdea8d810027d963e3c11ba0ce4b32347d638 Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.661667 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mspkk\" (UniqueName: 
\"kubernetes.io/projected/d070801e-b0f9-43f1-9521-c3548067d7cb-kube-api-access-mspkk\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.661735 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-catalog-content\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.661780 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-utilities\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.662593 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-utilities\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.663438 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-catalog-content\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.683567 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mspkk\" (UniqueName: \"kubernetes.io/projected/d070801e-b0f9-43f1-9521-c3548067d7cb-kube-api-access-mspkk\") pod \"community-operators-gsg89\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.737753 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.833524 4789 generic.go:334] "Generic (PLEG): container finished" podID="5cab2fff-81a2-48d7-b216-28abaf890739" containerID="e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d" exitCode=0 Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.833587 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dns7w" event={"ID":"5cab2fff-81a2-48d7-b216-28abaf890739","Type":"ContainerDied","Data":"e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d"} Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.833612 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dns7w" event={"ID":"5cab2fff-81a2-48d7-b216-28abaf890739","Type":"ContainerStarted","Data":"332e6b9e3502a50c66b0469c19efdea8d810027d963e3c11ba0ce4b32347d638"} Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.837322 4789 generic.go:334] "Generic (PLEG): container finished" podID="6b306b4d-a5ff-4c9c-b070-967f57a7e0fc" containerID="2c5378d9f0023a477ad3091096f5b770ecdc912fac94935f47f3e0ca35e28809" exitCode=0 Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.837412 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nkmf" event={"ID":"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc","Type":"ContainerDied","Data":"2c5378d9f0023a477ad3091096f5b770ecdc912fac94935f47f3e0ca35e28809"} Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.844850 4789 generic.go:334] "Generic (PLEG): container finished" podID="dd57300a-9489-4148-8c58-89477b5d9af4" containerID="9926af3cced966f8b2aeae423134993819c8b9c7bf71525430ace79ceaf7eae6" exitCode=0 Nov 24 11:34:17 crc kubenswrapper[4789]: I1124 11:34:17.844893 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7xwt" event={"ID":"dd57300a-9489-4148-8c58-89477b5d9af4","Type":"ContainerDied","Data":"9926af3cced966f8b2aeae423134993819c8b9c7bf71525430ace79ceaf7eae6"} Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.121093 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gsg89"] Nov 24 11:34:18 crc kubenswrapper[4789]: W1124 11:34:18.132836 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd070801e_b0f9_43f1_9521_c3548067d7cb.slice/crio-190b9c9d263e5582528d1207442df5016b00a83dbbf687f6491652bdf9a54099 WatchSource:0}: Error finding container 190b9c9d263e5582528d1207442df5016b00a83dbbf687f6491652bdf9a54099: Status 404 returned error can't find the container with id 190b9c9d263e5582528d1207442df5016b00a83dbbf687f6491652bdf9a54099 Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.878817 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4nkmf" event={"ID":"6b306b4d-a5ff-4c9c-b070-967f57a7e0fc","Type":"ContainerStarted","Data":"d74e6df816b3475f58a3963b66d2f991e74d86fec811c796371c7d6fa7574830"} Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.893131 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dns7w" event={"ID":"5cab2fff-81a2-48d7-b216-28abaf890739","Type":"ContainerStarted","Data":"f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f"} Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.895315 4789 
generic.go:334] "Generic (PLEG): container finished" podID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerID="59d45f61724aec867a0bfd40993883ab7b70d0c2c62ee1dfb5b0471092d84d99" exitCode=0 Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.895444 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsg89" event={"ID":"d070801e-b0f9-43f1-9521-c3548067d7cb","Type":"ContainerDied","Data":"59d45f61724aec867a0bfd40993883ab7b70d0c2c62ee1dfb5b0471092d84d99"} Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.895548 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsg89" event={"ID":"d070801e-b0f9-43f1-9521-c3548067d7cb","Type":"ContainerStarted","Data":"190b9c9d263e5582528d1207442df5016b00a83dbbf687f6491652bdf9a54099"} Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.906035 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7xwt" event={"ID":"dd57300a-9489-4148-8c58-89477b5d9af4","Type":"ContainerStarted","Data":"65a6e95ea6bbec684c0495c12995cebe66cd267a25911f89698554f071b91b9c"} Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.907261 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4nkmf" podStartSLOduration=2.415774228 podStartE2EDuration="4.907250937s" podCreationTimestamp="2025-11-24 11:34:14 +0000 UTC" firstStartedPulling="2025-11-24 11:34:15.820209373 +0000 UTC m=+238.402680752" lastFinishedPulling="2025-11-24 11:34:18.311686082 +0000 UTC m=+240.894157461" observedRunningTime="2025-11-24 11:34:18.898502131 +0000 UTC m=+241.480973510" watchObservedRunningTime="2025-11-24 11:34:18.907250937 +0000 UTC m=+241.489722316" Nov 24 11:34:18 crc kubenswrapper[4789]: I1124 11:34:18.970411 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c7xwt" podStartSLOduration=2.478983244 podStartE2EDuration="4.970394451s" podCreationTimestamp="2025-11-24 11:34:14 +0000 UTC" firstStartedPulling="2025-11-24 11:34:15.81819501 +0000 UTC m=+238.400666389" lastFinishedPulling="2025-11-24 11:34:18.309606197 +0000 UTC m=+240.892077596" observedRunningTime="2025-11-24 11:34:18.968695147 +0000 UTC m=+241.551166526" watchObservedRunningTime="2025-11-24 11:34:18.970394451 +0000 UTC m=+241.552865830" Nov 24 11:34:19 crc kubenswrapper[4789]: I1124 11:34:19.912552 4789 generic.go:334] "Generic (PLEG): container finished" podID="5cab2fff-81a2-48d7-b216-28abaf890739" containerID="f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f" exitCode=0 Nov 24 11:34:19 crc kubenswrapper[4789]: I1124 11:34:19.912615 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dns7w" event={"ID":"5cab2fff-81a2-48d7-b216-28abaf890739","Type":"ContainerDied","Data":"f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f"} Nov 24 11:34:20 crc kubenswrapper[4789]: I1124 11:34:20.930751 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dns7w" event={"ID":"5cab2fff-81a2-48d7-b216-28abaf890739","Type":"ContainerStarted","Data":"a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8"} Nov 24 11:34:20 crc kubenswrapper[4789]: I1124 11:34:20.933231 4789 generic.go:334] "Generic (PLEG): container finished" podID="d070801e-b0f9-43f1-9521-c3548067d7cb" 
containerID="e3ac42f981d3618e3c1e2ce6f71ba408263a77528ea323adb1decb53540bfba2" exitCode=0 Nov 24 11:34:20 crc kubenswrapper[4789]: I1124 11:34:20.933278 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsg89" event={"ID":"d070801e-b0f9-43f1-9521-c3548067d7cb","Type":"ContainerDied","Data":"e3ac42f981d3618e3c1e2ce6f71ba408263a77528ea323adb1decb53540bfba2"} Nov 24 11:34:20 crc kubenswrapper[4789]: I1124 11:34:20.949556 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dns7w" podStartSLOduration=2.045414214 podStartE2EDuration="4.949525647s" podCreationTimestamp="2025-11-24 11:34:16 +0000 UTC" firstStartedPulling="2025-11-24 11:34:17.838272436 +0000 UTC m=+240.420743815" lastFinishedPulling="2025-11-24 11:34:20.742383869 +0000 UTC m=+243.324855248" observedRunningTime="2025-11-24 11:34:20.947516466 +0000 UTC m=+243.529987915" watchObservedRunningTime="2025-11-24 11:34:20.949525647 +0000 UTC m=+243.531997056" Nov 24 11:34:22 crc kubenswrapper[4789]: I1124 11:34:22.946132 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsg89" event={"ID":"d070801e-b0f9-43f1-9521-c3548067d7cb","Type":"ContainerStarted","Data":"451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf"} Nov 24 11:34:24 crc kubenswrapper[4789]: I1124 11:34:24.745149 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:24 crc kubenswrapper[4789]: I1124 11:34:24.745483 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:24 crc kubenswrapper[4789]: I1124 11:34:24.790627 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:24 crc kubenswrapper[4789]: I1124 11:34:24.819834 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gsg89" podStartSLOduration=4.955507969 podStartE2EDuration="7.819816653s" podCreationTimestamp="2025-11-24 11:34:17 +0000 UTC" firstStartedPulling="2025-11-24 11:34:18.900117823 +0000 UTC m=+241.482589202" lastFinishedPulling="2025-11-24 11:34:21.764426507 +0000 UTC m=+244.346897886" observedRunningTime="2025-11-24 11:34:22.97143024 +0000 UTC m=+245.553901619" watchObservedRunningTime="2025-11-24 11:34:24.819816653 +0000 UTC m=+247.402288032" Nov 24 11:34:24 crc kubenswrapper[4789]: I1124 11:34:24.998636 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c7xwt" Nov 24 11:34:25 crc kubenswrapper[4789]: I1124 11:34:25.336905 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:25 crc kubenswrapper[4789]: I1124 11:34:25.336968 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:25 crc kubenswrapper[4789]: I1124 11:34:25.377583 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:25 crc kubenswrapper[4789]: I1124 11:34:25.997032 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4nkmf" Nov 24 11:34:27 crc kubenswrapper[4789]: 
I1124 11:34:27.188751 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:27 crc kubenswrapper[4789]: I1124 11:34:27.188832 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:27 crc kubenswrapper[4789]: I1124 11:34:27.229986 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:27 crc kubenswrapper[4789]: I1124 11:34:27.738644 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:27 crc kubenswrapper[4789]: I1124 11:34:27.738688 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:27 crc kubenswrapper[4789]: I1124 11:34:27.808085 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:34:28 crc kubenswrapper[4789]: I1124 11:34:28.011529 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:34:28 crc kubenswrapper[4789]: I1124 11:34:28.033970 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:35:50 crc kubenswrapper[4789]: I1124 11:35:50.163011 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:35:50 crc kubenswrapper[4789]: I1124 11:35:50.165330 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:36:20 crc kubenswrapper[4789]: I1124 11:36:20.162814 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:36:20 crc kubenswrapper[4789]: I1124 11:36:20.163558 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.162319 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.163652 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.163755 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.164670 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d01d9f803d962ac5043375280873250a6cee3099fd94b66cca2fe0e05b74f3c0"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.164798 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://d01d9f803d962ac5043375280873250a6cee3099fd94b66cca2fe0e05b74f3c0" gracePeriod=600 Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.877593 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="d01d9f803d962ac5043375280873250a6cee3099fd94b66cca2fe0e05b74f3c0" exitCode=0 Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.877666 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"d01d9f803d962ac5043375280873250a6cee3099fd94b66cca2fe0e05b74f3c0"} Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.878272 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"64e45ebae9200df335dbfb46077262c25e90b02c6e55caf8466a7e14f278b850"} Nov 24 11:36:50 crc kubenswrapper[4789]: I1124 11:36:50.878298 4789 scope.go:117] "RemoveContainer" containerID="af7ea3ed9f8a7b96cae0a3b110df313967295ddab6f7fb0366e218101bb94250" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.005856 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-295tz"] Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.006886 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.067628 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-295tz"] Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.127429 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc65g\" (UniqueName: \"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-kube-api-access-tc65g\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.127551 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-registry-tls\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.127588 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/318c1774-33e4-4a10-bca2-9ad18d20aa02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.127631 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/318c1774-33e4-4a10-bca2-9ad18d20aa02-registry-certificates\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.127694 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.127760 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/318c1774-33e4-4a10-bca2-9ad18d20aa02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.127808 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/318c1774-33e4-4a10-bca2-9ad18d20aa02-trusted-ca\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.127857 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-bound-sa-token\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.148827 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.228509 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/318c1774-33e4-4a10-bca2-9ad18d20aa02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.228847 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/318c1774-33e4-4a10-bca2-9ad18d20aa02-registry-certificates\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.228914 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/318c1774-33e4-4a10-bca2-9ad18d20aa02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.228944 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/318c1774-33e4-4a10-bca2-9ad18d20aa02-trusted-ca\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.228979 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-bound-sa-token\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.229004 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc65g\" (UniqueName: \"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-kube-api-access-tc65g\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.229034 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-registry-tls\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.228915 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/318c1774-33e4-4a10-bca2-9ad18d20aa02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.229825 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/318c1774-33e4-4a10-bca2-9ad18d20aa02-registry-certificates\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.230310 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/318c1774-33e4-4a10-bca2-9ad18d20aa02-trusted-ca\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.234969 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-registry-tls\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.236935 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/318c1774-33e4-4a10-bca2-9ad18d20aa02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.243936 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-bound-sa-token\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.246037 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc65g\" (UniqueName: \"kubernetes.io/projected/318c1774-33e4-4a10-bca2-9ad18d20aa02-kube-api-access-tc65g\") pod \"image-registry-66df7c8f76-295tz\" (UID: \"318c1774-33e4-4a10-bca2-9ad18d20aa02\") " pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.321944 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:17 crc kubenswrapper[4789]: I1124 11:38:17.547077 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-295tz"] Nov 24 11:38:18 crc kubenswrapper[4789]: I1124 11:38:18.440585 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-295tz" event={"ID":"318c1774-33e4-4a10-bca2-9ad18d20aa02","Type":"ContainerStarted","Data":"4dc7457752d33fe19ebdaa4d29feaabae3f84133e98eb0d97b9e37b1d9cc2503"} Nov 24 11:38:18 crc kubenswrapper[4789]: I1124 11:38:18.441412 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-295tz" event={"ID":"318c1774-33e4-4a10-bca2-9ad18d20aa02","Type":"ContainerStarted","Data":"6b42463b7f7a882b79f72d2ed50a8a6a737dc19e30278d183afb9469dd271f3d"} Nov 24 11:38:18 crc kubenswrapper[4789]: I1124 11:38:18.441720 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:37 crc kubenswrapper[4789]: I1124 11:38:37.328755 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-295tz" Nov 24 11:38:37 crc kubenswrapper[4789]: I1124 11:38:37.357902 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-295tz" podStartSLOduration=21.357885984 podStartE2EDuration="21.357885984s" podCreationTimestamp="2025-11-24 11:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:18.471062637 +0000 UTC m=+481.053534046" watchObservedRunningTime="2025-11-24 11:38:37.357885984 +0000 UTC m=+499.940357363" Nov 24 11:38:37 crc kubenswrapper[4789]: I1124 11:38:37.410209 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q52tc"] Nov 24 11:38:50 crc kubenswrapper[4789]: I1124 11:38:50.162722 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:38:50 crc kubenswrapper[4789]: I1124 11:38:50.163213 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.468647 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" podUID="51c0ab73-bbc1-4f70-afa7-059dec256973" containerName="registry" containerID="cri-o://8275e9b3d5833f89a4c7d9b219a72d5a9521452da859d64476fc7801a87e1930" gracePeriod=30 Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.716815 4789 generic.go:334] "Generic (PLEG): container finished" podID="51c0ab73-bbc1-4f70-afa7-059dec256973" containerID="8275e9b3d5833f89a4c7d9b219a72d5a9521452da859d64476fc7801a87e1930" exitCode=0 Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.716860 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" event={"ID":"51c0ab73-bbc1-4f70-afa7-059dec256973","Type":"ContainerDied","Data":"8275e9b3d5833f89a4c7d9b219a72d5a9521452da859d64476fc7801a87e1930"} Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.790216 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.985075 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"51c0ab73-bbc1-4f70-afa7-059dec256973\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.985180 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-certificates\") pod \"51c0ab73-bbc1-4f70-afa7-059dec256973\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.985207 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51c0ab73-bbc1-4f70-afa7-059dec256973-ca-trust-extracted\") pod \"51c0ab73-bbc1-4f70-afa7-059dec256973\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.985228 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-trusted-ca\") pod \"51c0ab73-bbc1-4f70-afa7-059dec256973\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.985632 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51c0ab73-bbc1-4f70-afa7-059dec256973-installation-pull-secrets\") pod \"51c0ab73-bbc1-4f70-afa7-059dec256973\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.985719 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gnc5\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-kube-api-access-2gnc5\") pod \"51c0ab73-bbc1-4f70-afa7-059dec256973\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.986133 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "51c0ab73-bbc1-4f70-afa7-059dec256973" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.986682 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-bound-sa-token\") pod \"51c0ab73-bbc1-4f70-afa7-059dec256973\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.987267 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-tls\") pod \"51c0ab73-bbc1-4f70-afa7-059dec256973\" (UID: \"51c0ab73-bbc1-4f70-afa7-059dec256973\") " Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.987572 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.987566 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "51c0ab73-bbc1-4f70-afa7-059dec256973" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.995482 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c0ab73-bbc1-4f70-afa7-059dec256973-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "51c0ab73-bbc1-4f70-afa7-059dec256973" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.995658 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-kube-api-access-2gnc5" (OuterVolumeSpecName: "kube-api-access-2gnc5") pod "51c0ab73-bbc1-4f70-afa7-059dec256973" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973"). InnerVolumeSpecName "kube-api-access-2gnc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:02 crc kubenswrapper[4789]: I1124 11:39:02.997480 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "51c0ab73-bbc1-4f70-afa7-059dec256973" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.001857 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "51c0ab73-bbc1-4f70-afa7-059dec256973" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.002712 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "51c0ab73-bbc1-4f70-afa7-059dec256973" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.019194 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c0ab73-bbc1-4f70-afa7-059dec256973-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "51c0ab73-bbc1-4f70-afa7-059dec256973" (UID: "51c0ab73-bbc1-4f70-afa7-059dec256973"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.089206 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gnc5\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-kube-api-access-2gnc5\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.089277 4789 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.089312 4789 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.089322 4789 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51c0ab73-bbc1-4f70-afa7-059dec256973-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.089331 4789 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51c0ab73-bbc1-4f70-afa7-059dec256973-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.089341 4789 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51c0ab73-bbc1-4f70-afa7-059dec256973-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.724061 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" event={"ID":"51c0ab73-bbc1-4f70-afa7-059dec256973","Type":"ContainerDied","Data":"243e6f5c0b626f134c9e06d401ccd56dba8c206cd2f1e2887444948da6496657"} Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.724118 4789 scope.go:117] "RemoveContainer" containerID="8275e9b3d5833f89a4c7d9b219a72d5a9521452da859d64476fc7801a87e1930" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.724195 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q52tc" Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.768349 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q52tc"] Nov 24 11:39:03 crc kubenswrapper[4789]: I1124 11:39:03.772050 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q52tc"] Nov 24 11:39:04 crc kubenswrapper[4789]: I1124 11:39:04.178309 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c0ab73-bbc1-4f70-afa7-059dec256973" path="/var/lib/kubelet/pods/51c0ab73-bbc1-4f70-afa7-059dec256973/volumes" Nov 24 11:39:20 crc kubenswrapper[4789]: I1124 11:39:20.163100 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:39:20 crc kubenswrapper[4789]: I1124 11:39:20.163794 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:39:50 crc kubenswrapper[4789]: I1124 11:39:50.163122 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:39:50 crc kubenswrapper[4789]: I1124 11:39:50.164632 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:39:50 crc kubenswrapper[4789]: I1124 11:39:50.164728 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:39:50 crc kubenswrapper[4789]: I1124 11:39:50.165622 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64e45ebae9200df335dbfb46077262c25e90b02c6e55caf8466a7e14f278b850"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:39:50 crc kubenswrapper[4789]: I1124 11:39:50.165732 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://64e45ebae9200df335dbfb46077262c25e90b02c6e55caf8466a7e14f278b850" gracePeriod=600 Nov 24 11:39:51 crc kubenswrapper[4789]: I1124 11:39:51.040448 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="64e45ebae9200df335dbfb46077262c25e90b02c6e55caf8466a7e14f278b850" exitCode=0 Nov 24 11:39:51 crc kubenswrapper[4789]: I1124 11:39:51.040512 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"64e45ebae9200df335dbfb46077262c25e90b02c6e55caf8466a7e14f278b850"} Nov 24 11:39:51 crc kubenswrapper[4789]: I1124 11:39:51.040924 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"8e60897d5da5e8d43be26df5c1cea722069e382de7019ee5de88fc244959bfbd"} Nov 24 11:39:51 crc kubenswrapper[4789]: I1124 11:39:51.040947 4789 scope.go:117] "RemoveContainer" containerID="d01d9f803d962ac5043375280873250a6cee3099fd94b66cca2fe0e05b74f3c0" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.025045 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-g5g5j"] Nov 24 11:40:06 crc kubenswrapper[4789]: E1124 11:40:06.026080 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c0ab73-bbc1-4f70-afa7-059dec256973" containerName="registry" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.026105 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c0ab73-bbc1-4f70-afa7-059dec256973" containerName="registry" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.026279 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c0ab73-bbc1-4f70-afa7-059dec256973" containerName="registry" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.026802 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-g5g5j" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.031593 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.031798 4789 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-nlfgg" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.034260 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-g5g5j"] Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.034867 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.053887 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-m6j4q"] Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.054700 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.056534 4789 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vnnpn" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.056642 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-46llj"] Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.057116 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-46llj" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.060037 4789 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-4bbwj" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.072099 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-m6j4q"] Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.085180 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-46llj"] Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.144680 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rfg\" (UniqueName: \"kubernetes.io/projected/00b5b4f8-e390-4c4f-a1dc-b8c13860b689-kube-api-access-f4rfg\") pod \"cert-manager-webhook-5655c58dd6-m6j4q\" (UID: \"00b5b4f8-e390-4c4f-a1dc-b8c13860b689\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.144751 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ftqp\" (UniqueName: \"kubernetes.io/projected/8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6-kube-api-access-9ftqp\") pod \"cert-manager-5b446d88c5-46llj\" (UID: \"8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6\") " pod="cert-manager/cert-manager-5b446d88c5-46llj" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.144803 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84jx9\" (UniqueName: \"kubernetes.io/projected/47a486f0-4af5-4bb7-acf5-6b827e216fde-kube-api-access-84jx9\") pod \"cert-manager-cainjector-7f985d654d-g5g5j\" (UID: \"47a486f0-4af5-4bb7-acf5-6b827e216fde\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-g5g5j" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.246658 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rfg\" (UniqueName: \"kubernetes.io/projected/00b5b4f8-e390-4c4f-a1dc-b8c13860b689-kube-api-access-f4rfg\") pod \"cert-manager-webhook-5655c58dd6-m6j4q\" (UID: \"00b5b4f8-e390-4c4f-a1dc-b8c13860b689\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.246766 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ftqp\" (UniqueName: \"kubernetes.io/projected/8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6-kube-api-access-9ftqp\") pod \"cert-manager-5b446d88c5-46llj\" (UID: \"8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6\") " pod="cert-manager/cert-manager-5b446d88c5-46llj" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.246830 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84jx9\" (UniqueName: \"kubernetes.io/projected/47a486f0-4af5-4bb7-acf5-6b827e216fde-kube-api-access-84jx9\") pod \"cert-manager-cainjector-7f985d654d-g5g5j\" (UID: \"47a486f0-4af5-4bb7-acf5-6b827e216fde\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-g5g5j" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.267581 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84jx9\" (UniqueName: \"kubernetes.io/projected/47a486f0-4af5-4bb7-acf5-6b827e216fde-kube-api-access-84jx9\") pod \"cert-manager-cainjector-7f985d654d-g5g5j\" (UID: \"47a486f0-4af5-4bb7-acf5-6b827e216fde\") " 
pod="cert-manager/cert-manager-cainjector-7f985d654d-g5g5j" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.270242 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ftqp\" (UniqueName: \"kubernetes.io/projected/8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6-kube-api-access-9ftqp\") pod \"cert-manager-5b446d88c5-46llj\" (UID: \"8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6\") " pod="cert-manager/cert-manager-5b446d88c5-46llj" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.272132 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rfg\" (UniqueName: \"kubernetes.io/projected/00b5b4f8-e390-4c4f-a1dc-b8c13860b689-kube-api-access-f4rfg\") pod \"cert-manager-webhook-5655c58dd6-m6j4q\" (UID: \"00b5b4f8-e390-4c4f-a1dc-b8c13860b689\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.342299 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-g5g5j" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.371964 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.379544 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-46llj" Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.858699 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-g5g5j"] Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.868102 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.899751 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-m6j4q"] Nov 24 11:40:06 crc kubenswrapper[4789]: I1124 11:40:06.902840 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-46llj"] Nov 24 11:40:06 crc kubenswrapper[4789]: W1124 11:40:06.904871 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b5b4f8_e390_4c4f_a1dc_b8c13860b689.slice/crio-8f099033095e2f91f873d5f91db85403a8cfb8ebfd40006ae559e416989ec91c WatchSource:0}: Error finding container 8f099033095e2f91f873d5f91db85403a8cfb8ebfd40006ae559e416989ec91c: Status 404 returned error can't find the container with id 8f099033095e2f91f873d5f91db85403a8cfb8ebfd40006ae559e416989ec91c Nov 24 11:40:06 crc kubenswrapper[4789]: W1124 11:40:06.909778 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b96cfd6_4b27_48b1_91b5_26a6cef7c9e6.slice/crio-b624740991be6bc2784158f663bd00e8d2e59fea205d0d76ff3eadb25f3b31a1 WatchSource:0}: Error finding container b624740991be6bc2784158f663bd00e8d2e59fea205d0d76ff3eadb25f3b31a1: Status 404 returned error can't find the container with id b624740991be6bc2784158f663bd00e8d2e59fea205d0d76ff3eadb25f3b31a1 Nov 24 11:40:07 crc kubenswrapper[4789]: I1124 11:40:07.146177 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-46llj" event={"ID":"8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6","Type":"ContainerStarted","Data":"b624740991be6bc2784158f663bd00e8d2e59fea205d0d76ff3eadb25f3b31a1"} Nov 24 
11:40:07 crc kubenswrapper[4789]: I1124 11:40:07.149143 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" event={"ID":"00b5b4f8-e390-4c4f-a1dc-b8c13860b689","Type":"ContainerStarted","Data":"8f099033095e2f91f873d5f91db85403a8cfb8ebfd40006ae559e416989ec91c"} Nov 24 11:40:07 crc kubenswrapper[4789]: I1124 11:40:07.150588 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-g5g5j" event={"ID":"47a486f0-4af5-4bb7-acf5-6b827e216fde","Type":"ContainerStarted","Data":"c47cab0dc253c49370e506e3116927e0f7ec6d18afe576020aa7302ab55ccde3"} Nov 24 11:40:11 crc kubenswrapper[4789]: I1124 11:40:11.173521 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" event={"ID":"00b5b4f8-e390-4c4f-a1dc-b8c13860b689","Type":"ContainerStarted","Data":"c9925a40f141f8d48ec8bb3925eec7330342124faf67edb1fd587ddec9cd66f0"} Nov 24 11:40:11 crc kubenswrapper[4789]: I1124 11:40:11.173978 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" Nov 24 11:40:11 crc kubenswrapper[4789]: I1124 11:40:11.174944 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-g5g5j" event={"ID":"47a486f0-4af5-4bb7-acf5-6b827e216fde","Type":"ContainerStarted","Data":"fcc2a3a62e453324106aaf0f62cb764e106856bb22c85c95515ee0f0cdc5a895"} Nov 24 11:40:11 crc kubenswrapper[4789]: I1124 11:40:11.176546 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-46llj" event={"ID":"8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6","Type":"ContainerStarted","Data":"ed3105fbb6bb9105cf6ac56b7a24e7a9771152d39aad72a367aa0039179a7896"} Nov 24 11:40:11 crc kubenswrapper[4789]: I1124 11:40:11.188774 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" podStartSLOduration=1.777514885 podStartE2EDuration="5.188755416s" podCreationTimestamp="2025-11-24 11:40:06 +0000 UTC" firstStartedPulling="2025-11-24 11:40:06.906870819 +0000 UTC m=+589.489342198" lastFinishedPulling="2025-11-24 11:40:10.31811133 +0000 UTC m=+592.900582729" observedRunningTime="2025-11-24 11:40:11.186992341 +0000 UTC m=+593.769463760" watchObservedRunningTime="2025-11-24 11:40:11.188755416 +0000 UTC m=+593.771226795" Nov 24 11:40:11 crc kubenswrapper[4789]: I1124 11:40:11.214338 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-g5g5j" podStartSLOduration=1.740406701 podStartE2EDuration="5.214321462s" podCreationTimestamp="2025-11-24 11:40:06 +0000 UTC" firstStartedPulling="2025-11-24 11:40:06.867865927 +0000 UTC m=+589.450337306" lastFinishedPulling="2025-11-24 11:40:10.341780678 +0000 UTC m=+592.924252067" observedRunningTime="2025-11-24 11:40:11.210119414 +0000 UTC m=+593.792590783" watchObservedRunningTime="2025-11-24 11:40:11.214321462 +0000 UTC m=+593.796792841" Nov 24 11:40:11 crc kubenswrapper[4789]: I1124 11:40:11.227946 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-46llj" podStartSLOduration=1.7984027 podStartE2EDuration="5.227922341s" podCreationTimestamp="2025-11-24 11:40:06 +0000 UTC" firstStartedPulling="2025-11-24 11:40:06.912338909 +0000 UTC m=+589.494810288" lastFinishedPulling="2025-11-24 11:40:10.34185854 +0000 UTC m=+592.924329929" 
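The three "Observed pod startup duration" records above carry enough data to check how the two durations relate: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). A minimal sketch using the monotonic m=+ offsets from the webhook record; the constants are copied from the log, while the formula itself is inferred from these numbers rather than taken from kubelet source:

# Check: podStartSLOduration == podStartE2EDuration - image-pull window,
# using the monotonic m=+ offsets from the cert-manager-webhook record above.
first_started_pulling = 589.489342198  # m=+ offset at firstStartedPulling
last_finished_pulling = 592.900582729  # m=+ offset at lastFinishedPulling
pod_start_e2e = 5.188755416            # podStartE2EDuration, seconds
pod_start_slo = 1.777514885            # podStartSLOduration, seconds

pull_window = last_finished_pulling - first_started_pulling  # ~3.411240531 s
assert abs((pod_start_e2e - pull_window) - pod_start_slo) < 1e-6
print(f"pull window {pull_window:.9f}s, e2e - pull = {pod_start_e2e - pull_window:.9f}s")

The same identity holds for the cainjector and cert-manager records, so the SLO figure excludes time spent pulling images while the E2E figure includes it.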
observedRunningTime="2025-11-24 11:40:11.22513629 +0000 UTC m=+593.807607679" watchObservedRunningTime="2025-11-24 11:40:11.227922341 +0000 UTC m=+593.810393720" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.374766 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-m6j4q" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.493993 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4hd6"] Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.494844 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovn-controller" containerID="cri-o://3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4" gracePeriod=30 Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.495357 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="sbdb" containerID="cri-o://000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee" gracePeriod=30 Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.495724 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kube-rbac-proxy-node" containerID="cri-o://34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c" gracePeriod=30 Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.495810 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="nbdb" containerID="cri-o://b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf" gracePeriod=30 Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.495872 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="northd" containerID="cri-o://1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610" gracePeriod=30 Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.495943 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovn-acl-logging" containerID="cri-o://6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e" gracePeriod=30 Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.495978 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82" gracePeriod=30 Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.535538 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" containerID="cri-o://abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c" gracePeriod=30 Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.845176 4789 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/3.log" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.848624 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovn-acl-logging/0.log" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.849269 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovn-controller/0.log" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.849896 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910183 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-script-lib\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910229 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-etc-openvswitch\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910265 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910290 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-kubelet\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910312 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-netd\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910353 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-ovn\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910373 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-systemd-units\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910395 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-ovn-kubernetes\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910478 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-netns\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910499 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-systemd\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910520 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-openvswitch\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910541 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-config\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910570 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6d361cd-fbb3-466d-9026-4c685922072f-ovn-node-metrics-cert\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910597 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-bin\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910616 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-node-log\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910641 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-env-overrides\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910662 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-var-lib-openvswitch\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910678 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-slash\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910715 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f7tm\" (UniqueName: \"kubernetes.io/projected/c6d361cd-fbb3-466d-9026-4c685922072f-kube-api-access-9f7tm\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.910740 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-log-socket\") pod \"c6d361cd-fbb3-466d-9026-4c685922072f\" (UID: \"c6d361cd-fbb3-466d-9026-4c685922072f\") " Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.911001 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-log-socket" (OuterVolumeSpecName: "log-socket") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.911040 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.911878 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.911918 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.911951 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.911953 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-slash" (OuterVolumeSpecName: "host-slash") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.911999 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.912004 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-node-log" (OuterVolumeSpecName: "node-log") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.912027 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.911972 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.912044 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.912060 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.912080 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.912081 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.912087 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.912103 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.913591 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.919777 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6d361cd-fbb3-466d-9026-4c685922072f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.920036 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d361cd-fbb3-466d-9026-4c685922072f-kube-api-access-9f7tm" (OuterVolumeSpecName: "kube-api-access-9f7tm") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "kube-api-access-9f7tm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923265 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-svc2n"] Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923440 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923472 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923480 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923487 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923499 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kubecfg-setup" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923507 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kubecfg-setup" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923518 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923524 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923532 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kube-rbac-proxy-node" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923539 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kube-rbac-proxy-node" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923546 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovn-acl-logging" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923552 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovn-acl-logging" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923557 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="sbdb" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923563 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="sbdb" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923571 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="nbdb" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923576 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="nbdb" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923585 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovn-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923591 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovn-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923598 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="northd" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923604 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="northd" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923612 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923617 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923710 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="nbdb" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923723 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923732 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923740 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923747 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="northd" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923755 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="sbdb" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923763 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovn-acl-logging" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923772 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovn-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923779 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923786 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="kube-rbac-proxy-node" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923794 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923801 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.923883 4789 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.923890 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: E1124 11:40:16.924061 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.924069 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" containerName="ovnkube-controller" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.925425 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:16 crc kubenswrapper[4789]: I1124 11:40:16.932195 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c6d361cd-fbb3-466d-9026-4c685922072f" (UID: "c6d361cd-fbb3-466d-9026-4c685922072f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.011965 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-systemd-units\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012026 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-cni-bin\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012058 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-var-lib-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012099 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-systemd\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012121 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-ovnkube-config\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012151 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-run-netns\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012165 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-cni-netd\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012184 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012212 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-env-overrides\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012246 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-ovn\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012352 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-node-log\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012428 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-slash\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012495 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012521 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-etc-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012554 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-log-socket\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012581 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-run-ovn-kubernetes\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012614 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-ovnkube-script-lib\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012702 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6tv2\" (UniqueName: \"kubernetes.io/projected/d6836a76-40cb-4b64-914f-e61390e3942d-kube-api-access-b6tv2\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012767 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-kubelet\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012806 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6836a76-40cb-4b64-914f-e61390e3942d-ovn-node-metrics-cert\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.012982 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013006 4789 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013026 4789 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013044 4789 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc 
kubenswrapper[4789]: I1124 11:40:17.013056 4789 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013071 4789 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013081 4789 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013095 4789 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013108 4789 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013118 4789 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013129 4789 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013139 4789 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013150 4789 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6d361cd-fbb3-466d-9026-4c685922072f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013161 4789 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013171 4789 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013182 4789 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6d361cd-fbb3-466d-9026-4c685922072f-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013192 4789 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013203 4789 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013215 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f7tm\" (UniqueName: \"kubernetes.io/projected/c6d361cd-fbb3-466d-9026-4c685922072f-kube-api-access-9f7tm\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.013227 4789 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6d361cd-fbb3-466d-9026-4c685922072f-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.113820 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-systemd-units\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.113941 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-cni-bin\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.113952 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-systemd-units\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114063 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-cni-bin\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114150 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-var-lib-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114231 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-var-lib-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114297 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-systemd\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114391 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-ovnkube-config\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114452 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-run-netns\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114507 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-cni-netd\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114545 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114588 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-env-overrides\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114623 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-ovn\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114623 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-run-netns\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114656 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-node-log\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114660 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114689 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-slash\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114701 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-ovn\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114738 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-cni-netd\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114741 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114775 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114794 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-slash\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114833 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-etc-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114828 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-node-log\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114870 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-run-systemd\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114893 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-etc-openvswitch\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114857 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-log-socket\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114874 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-log-socket\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.114974 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-run-ovn-kubernetes\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.115015 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-ovnkube-script-lib\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.115053 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tv2\" (UniqueName: \"kubernetes.io/projected/d6836a76-40cb-4b64-914f-e61390e3942d-kube-api-access-b6tv2\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.115078 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-run-ovn-kubernetes\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.115086 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-kubelet\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.115120 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6836a76-40cb-4b64-914f-e61390e3942d-host-kubelet\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.115080 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-ovnkube-config\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 
11:40:17.115130 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6836a76-40cb-4b64-914f-e61390e3942d-ovn-node-metrics-cert\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.115317 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-env-overrides\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.116068 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6836a76-40cb-4b64-914f-e61390e3942d-ovnkube-script-lib\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.119117 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6836a76-40cb-4b64-914f-e61390e3942d-ovn-node-metrics-cert\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.139698 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tv2\" (UniqueName: \"kubernetes.io/projected/d6836a76-40cb-4b64-914f-e61390e3942d-kube-api-access-b6tv2\") pod \"ovnkube-node-svc2n\" (UID: \"d6836a76-40cb-4b64-914f-e61390e3942d\") " pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.217555 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/2.log" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.218160 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/1.log" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.218198 4789 generic.go:334] "Generic (PLEG): container finished" podID="776a7cdb-6468-4e8a-8577-3535ff549781" containerID="203e3c34a84e87a42786ebf6949054419d8b261ddf1df1c709a9e12b3299b362" exitCode=2 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.218260 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5fgg5" event={"ID":"776a7cdb-6468-4e8a-8577-3535ff549781","Type":"ContainerDied","Data":"203e3c34a84e87a42786ebf6949054419d8b261ddf1df1c709a9e12b3299b362"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.218293 4789 scope.go:117] "RemoveContainer" containerID="d61abcc33b471ae4b6dd594629a2287b59f66577b200848232023fa03a32aad1" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.218869 4789 scope.go:117] "RemoveContainer" containerID="203e3c34a84e87a42786ebf6949054419d8b261ddf1df1c709a9e12b3299b362" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.219357 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5fgg5_openshift-multus(776a7cdb-6468-4e8a-8577-3535ff549781)\"" 
pod="openshift-multus/multus-5fgg5" podUID="776a7cdb-6468-4e8a-8577-3535ff549781" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.225161 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovnkube-controller/3.log" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.229379 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovn-acl-logging/0.log" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.230479 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n4hd6_c6d361cd-fbb3-466d-9026-4c685922072f/ovn-controller/0.log" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.230975 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c" exitCode=0 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.230995 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee" exitCode=0 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231002 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf" exitCode=0 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231011 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610" exitCode=0 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231017 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82" exitCode=0 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231023 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c" exitCode=0 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231030 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e" exitCode=143 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231037 4789 generic.go:334] "Generic (PLEG): container finished" podID="c6d361cd-fbb3-466d-9026-4c685922072f" containerID="3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4" exitCode=143 Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231055 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231081 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231094 4789 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231103 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231111 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231121 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231130 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231147 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231153 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231158 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231163 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231168 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231173 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231178 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231182 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231187 4789 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231194 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231201 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231206 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231211 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231216 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231222 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231226 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231236 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231241 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231246 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231250 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231257 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231266 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231271 
4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231276 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231281 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231286 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231291 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231297 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231301 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231306 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231311 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231317 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" event={"ID":"c6d361cd-fbb3-466d-9026-4c685922072f","Type":"ContainerDied","Data":"369714fa1e537121e09a6c7963147c6fdbb6b5e6a73a97fcbf912ba24edec73c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231324 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231330 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231335 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231339 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231344 
4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231349 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231354 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231359 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231364 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231369 4789 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.231442 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n4hd6" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.252323 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.256440 4789 scope.go:117] "RemoveContainer" containerID="abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.281618 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4hd6"] Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.287184 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n4hd6"] Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.292324 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.311221 4789 scope.go:117] "RemoveContainer" containerID="000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.324289 4789 scope.go:117] "RemoveContainer" containerID="b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.338608 4789 scope.go:117] "RemoveContainer" containerID="1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.353667 4789 scope.go:117] "RemoveContainer" containerID="e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.364524 4789 scope.go:117] "RemoveContainer" containerID="34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.376839 4789 scope.go:117] "RemoveContainer" 
containerID="6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.397782 4789 scope.go:117] "RemoveContainer" containerID="3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.448314 4789 scope.go:117] "RemoveContainer" containerID="84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.459686 4789 scope.go:117] "RemoveContainer" containerID="abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.460020 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": container with ID starting with abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c not found: ID does not exist" containerID="abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.460066 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} err="failed to get container status \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": rpc error: code = NotFound desc = could not find container \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": container with ID starting with abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.460087 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.460483 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": container with ID starting with ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7 not found: ID does not exist" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.460506 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} err="failed to get container status \"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": rpc error: code = NotFound desc = could not find container \"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": container with ID starting with ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.460520 4789 scope.go:117] "RemoveContainer" containerID="000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.460784 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": container with ID starting with 000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee not found: ID does not exist" containerID="000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee" 
Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.460819 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} err="failed to get container status \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": rpc error: code = NotFound desc = could not find container \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": container with ID starting with 000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.460843 4789 scope.go:117] "RemoveContainer" containerID="b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.461147 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": container with ID starting with b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf not found: ID does not exist" containerID="b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.461169 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} err="failed to get container status \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": rpc error: code = NotFound desc = could not find container \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": container with ID starting with b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.461185 4789 scope.go:117] "RemoveContainer" containerID="1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.461474 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": container with ID starting with 1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610 not found: ID does not exist" containerID="1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.461500 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} err="failed to get container status \"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": rpc error: code = NotFound desc = could not find container \"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": container with ID starting with 1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.461513 4789 scope.go:117] "RemoveContainer" containerID="e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.461733 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": container with ID starting with 
e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82 not found: ID does not exist" containerID="e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.461752 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} err="failed to get container status \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": rpc error: code = NotFound desc = could not find container \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": container with ID starting with e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.461783 4789 scope.go:117] "RemoveContainer" containerID="34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.461990 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": container with ID starting with 34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c not found: ID does not exist" containerID="34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.462007 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} err="failed to get container status \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": rpc error: code = NotFound desc = could not find container \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": container with ID starting with 34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.462058 4789 scope.go:117] "RemoveContainer" containerID="6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.462274 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": container with ID starting with 6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e not found: ID does not exist" containerID="6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.462291 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} err="failed to get container status \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": rpc error: code = NotFound desc = could not find container \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": container with ID starting with 6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.462320 4789 scope.go:117] "RemoveContainer" containerID="3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.462564 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": container with ID starting with 3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4 not found: ID does not exist" containerID="3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.462584 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} err="failed to get container status \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": rpc error: code = NotFound desc = could not find container \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": container with ID starting with 3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.462595 4789 scope.go:117] "RemoveContainer" containerID="84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6" Nov 24 11:40:17 crc kubenswrapper[4789]: E1124 11:40:17.462809 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": container with ID starting with 84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6 not found: ID does not exist" containerID="84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.462826 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} err="failed to get container status \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": rpc error: code = NotFound desc = could not find container \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": container with ID starting with 84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.462855 4789 scope.go:117] "RemoveContainer" containerID="abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.463061 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} err="failed to get container status \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": rpc error: code = NotFound desc = could not find container \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": container with ID starting with abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.463080 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.463379 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} err="failed to get container status \"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": rpc error: code = NotFound desc = could not find container 
\"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": container with ID starting with ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.463397 4789 scope.go:117] "RemoveContainer" containerID="000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.463704 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} err="failed to get container status \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": rpc error: code = NotFound desc = could not find container \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": container with ID starting with 000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.463745 4789 scope.go:117] "RemoveContainer" containerID="b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.465760 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} err="failed to get container status \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": rpc error: code = NotFound desc = could not find container \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": container with ID starting with b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.465780 4789 scope.go:117] "RemoveContainer" containerID="1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.466246 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} err="failed to get container status \"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": rpc error: code = NotFound desc = could not find container \"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": container with ID starting with 1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.466269 4789 scope.go:117] "RemoveContainer" containerID="e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.466504 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} err="failed to get container status \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": rpc error: code = NotFound desc = could not find container \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": container with ID starting with e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.466533 4789 scope.go:117] "RemoveContainer" containerID="34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.466898 4789 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} err="failed to get container status \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": rpc error: code = NotFound desc = could not find container \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": container with ID starting with 34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.466919 4789 scope.go:117] "RemoveContainer" containerID="6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.467177 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} err="failed to get container status \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": rpc error: code = NotFound desc = could not find container \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": container with ID starting with 6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.467199 4789 scope.go:117] "RemoveContainer" containerID="3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.467397 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} err="failed to get container status \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": rpc error: code = NotFound desc = could not find container \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": container with ID starting with 3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.467417 4789 scope.go:117] "RemoveContainer" containerID="84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.467637 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} err="failed to get container status \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": rpc error: code = NotFound desc = could not find container \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": container with ID starting with 84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.467654 4789 scope.go:117] "RemoveContainer" containerID="abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.467839 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} err="failed to get container status \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": rpc error: code = NotFound desc = could not find container \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": container with ID starting with 
abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.467857 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.468054 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} err="failed to get container status \"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": rpc error: code = NotFound desc = could not find container \"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": container with ID starting with ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.468070 4789 scope.go:117] "RemoveContainer" containerID="000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.468275 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} err="failed to get container status \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": rpc error: code = NotFound desc = could not find container \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": container with ID starting with 000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.468290 4789 scope.go:117] "RemoveContainer" containerID="b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.468470 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} err="failed to get container status \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": rpc error: code = NotFound desc = could not find container \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": container with ID starting with b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.468488 4789 scope.go:117] "RemoveContainer" containerID="1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.468755 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} err="failed to get container status \"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": rpc error: code = NotFound desc = could not find container \"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": container with ID starting with 1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.468776 4789 scope.go:117] "RemoveContainer" containerID="e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.469165 4789 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} err="failed to get container status \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": rpc error: code = NotFound desc = could not find container \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": container with ID starting with e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.469187 4789 scope.go:117] "RemoveContainer" containerID="34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.469557 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} err="failed to get container status \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": rpc error: code = NotFound desc = could not find container \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": container with ID starting with 34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.469577 4789 scope.go:117] "RemoveContainer" containerID="6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.469781 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} err="failed to get container status \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": rpc error: code = NotFound desc = could not find container \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": container with ID starting with 6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.469802 4789 scope.go:117] "RemoveContainer" containerID="3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.470102 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} err="failed to get container status \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": rpc error: code = NotFound desc = could not find container \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": container with ID starting with 3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.470124 4789 scope.go:117] "RemoveContainer" containerID="84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.472320 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} err="failed to get container status \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": rpc error: code = NotFound desc = could not find container \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": container with ID starting with 84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6 not found: ID does not exist" Nov 
24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.472363 4789 scope.go:117] "RemoveContainer" containerID="abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.472771 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c"} err="failed to get container status \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": rpc error: code = NotFound desc = could not find container \"abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c\": container with ID starting with abbfbb4dd6f082a5fba6b758e7bd41053e79e50f0d7cfbca13f4d8ca6859a54c not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.472787 4789 scope.go:117] "RemoveContainer" containerID="ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.473020 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7"} err="failed to get container status \"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": rpc error: code = NotFound desc = could not find container \"ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7\": container with ID starting with ed21fc0ba5eacac2e1d9700ac4207fca8de4239f61e3b9d17e18d22bb8c85de7 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.473039 4789 scope.go:117] "RemoveContainer" containerID="000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.473792 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee"} err="failed to get container status \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": rpc error: code = NotFound desc = could not find container \"000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee\": container with ID starting with 000fce00bed7a40421238e1b7d7f3be0382aaa6d87bfec0b79d3c16320a69cee not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.473841 4789 scope.go:117] "RemoveContainer" containerID="b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.475906 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf"} err="failed to get container status \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": rpc error: code = NotFound desc = could not find container \"b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf\": container with ID starting with b7b00dc312cb620a8da5c492ab32c80aa086d93dfb1abfa3d1977b1c21b453cf not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.475972 4789 scope.go:117] "RemoveContainer" containerID="1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.476579 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610"} err="failed to get container status 
\"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": rpc error: code = NotFound desc = could not find container \"1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610\": container with ID starting with 1752bb44b6dba2513f89f0bd127f5461f643ef054ef4a426a617a2b5ab3a7610 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.476604 4789 scope.go:117] "RemoveContainer" containerID="e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.477086 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82"} err="failed to get container status \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": rpc error: code = NotFound desc = could not find container \"e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82\": container with ID starting with e23e9fd75e219733a8e42dd00df7138b6f79aa4cf7f6ccf77c854b7f65a06d82 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.477111 4789 scope.go:117] "RemoveContainer" containerID="34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.483632 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c"} err="failed to get container status \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": rpc error: code = NotFound desc = could not find container \"34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c\": container with ID starting with 34ff3f3bd6ddc43bf0c905f88747b949cf701823eca2d577ced53ebb4d0bf35c not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.483846 4789 scope.go:117] "RemoveContainer" containerID="6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.484437 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e"} err="failed to get container status \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": rpc error: code = NotFound desc = could not find container \"6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e\": container with ID starting with 6d3e65a57b24dea616bec584c5e3f765428effdfff9090dcbafa671c0ca6549e not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.484594 4789 scope.go:117] "RemoveContainer" containerID="3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.485404 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4"} err="failed to get container status \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": rpc error: code = NotFound desc = could not find container \"3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4\": container with ID starting with 3c3fa2eedc84a18397b7956188ef3e50ded762486c7daba636f645ed69a5baa4 not found: ID does not exist" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.485557 4789 scope.go:117] "RemoveContainer" 
containerID="84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6" Nov 24 11:40:17 crc kubenswrapper[4789]: I1124 11:40:17.485867 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6"} err="failed to get container status \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": rpc error: code = NotFound desc = could not find container \"84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6\": container with ID starting with 84cf9ce831ceb2a1b2f103c802eedd6d196f673e6aac9f1b019c4ddf95414da6 not found: ID does not exist" Nov 24 11:40:18 crc kubenswrapper[4789]: I1124 11:40:18.183030 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d361cd-fbb3-466d-9026-4c685922072f" path="/var/lib/kubelet/pods/c6d361cd-fbb3-466d-9026-4c685922072f/volumes" Nov 24 11:40:18 crc kubenswrapper[4789]: I1124 11:40:18.238398 4789 generic.go:334] "Generic (PLEG): container finished" podID="d6836a76-40cb-4b64-914f-e61390e3942d" containerID="d7fdeee0879d7d66b3cb67fb95b00207cf5ff84b95c2546340c2b300bc7ad617" exitCode=0 Nov 24 11:40:18 crc kubenswrapper[4789]: I1124 11:40:18.238504 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerDied","Data":"d7fdeee0879d7d66b3cb67fb95b00207cf5ff84b95c2546340c2b300bc7ad617"} Nov 24 11:40:18 crc kubenswrapper[4789]: I1124 11:40:18.238554 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"c9cf616fe2ae1c3ca2ddedc9f9eaed1a121cb839c8f98bc26337d8d98468e18a"} Nov 24 11:40:18 crc kubenswrapper[4789]: I1124 11:40:18.240449 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/2.log" Nov 24 11:40:19 crc kubenswrapper[4789]: I1124 11:40:19.251212 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"b8e68828e5dceeba56d89d0c2b59673b84c9e560f71b962fcca01dc28d499b20"} Nov 24 11:40:19 crc kubenswrapper[4789]: I1124 11:40:19.252075 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"411b38a00a906a754fe9484c6c0612720932639a9d1964be2d527f6bc1a20bcd"} Nov 24 11:40:19 crc kubenswrapper[4789]: I1124 11:40:19.252109 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"e0f3dadf31be1ca8c091caa51331c373a20b5609f6efa48fb7d09d22796263a1"} Nov 24 11:40:19 crc kubenswrapper[4789]: I1124 11:40:19.252136 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"4f5a1ba74bae29b5e174bcc7cee7b983a52f5385d3a4fd242096014f82c0a5b2"} Nov 24 11:40:19 crc kubenswrapper[4789]: I1124 11:40:19.252161 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" 
event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"c04790b8ac6b562b13449fdd12168c3d04fe31af25a6f81941da8a499d5431f5"} Nov 24 11:40:19 crc kubenswrapper[4789]: I1124 11:40:19.252188 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"4025988b3aa2d7f27af0fc96670c1e15b6423764d0602ceb5d926a19ed7d06f8"} Nov 24 11:40:21 crc kubenswrapper[4789]: I1124 11:40:21.266375 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"ba4cc78b2aa97235dd507d18f060964a541a98fa4b69fe4debaaff419cf8f31a"} Nov 24 11:40:24 crc kubenswrapper[4789]: I1124 11:40:24.286336 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" event={"ID":"d6836a76-40cb-4b64-914f-e61390e3942d","Type":"ContainerStarted","Data":"c9f069139dd1c06430ebc7e5058307c70b4ca02307a252f90c69047971244ed1"} Nov 24 11:40:24 crc kubenswrapper[4789]: I1124 11:40:24.286725 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:24 crc kubenswrapper[4789]: I1124 11:40:24.286827 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:24 crc kubenswrapper[4789]: I1124 11:40:24.286904 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:24 crc kubenswrapper[4789]: I1124 11:40:24.314521 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:24 crc kubenswrapper[4789]: I1124 11:40:24.314581 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:24 crc kubenswrapper[4789]: I1124 11:40:24.349284 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" podStartSLOduration=8.349263964 podStartE2EDuration="8.349263964s" podCreationTimestamp="2025-11-24 11:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:40:24.318720971 +0000 UTC m=+606.901192360" watchObservedRunningTime="2025-11-24 11:40:24.349263964 +0000 UTC m=+606.931735353" Nov 24 11:40:28 crc kubenswrapper[4789]: I1124 11:40:28.172559 4789 scope.go:117] "RemoveContainer" containerID="203e3c34a84e87a42786ebf6949054419d8b261ddf1df1c709a9e12b3299b362" Nov 24 11:40:28 crc kubenswrapper[4789]: E1124 11:40:28.173400 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5fgg5_openshift-multus(776a7cdb-6468-4e8a-8577-3535ff549781)\"" pod="openshift-multus/multus-5fgg5" podUID="776a7cdb-6468-4e8a-8577-3535ff549781" Nov 24 11:40:41 crc kubenswrapper[4789]: I1124 11:40:41.169584 4789 scope.go:117] "RemoveContainer" containerID="203e3c34a84e87a42786ebf6949054419d8b261ddf1df1c709a9e12b3299b362" Nov 24 11:40:41 crc kubenswrapper[4789]: I1124 11:40:41.398334 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-5fgg5_776a7cdb-6468-4e8a-8577-3535ff549781/kube-multus/2.log" Nov 24 11:40:41 crc kubenswrapper[4789]: I1124 11:40:41.398701 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5fgg5" event={"ID":"776a7cdb-6468-4e8a-8577-3535ff549781","Type":"ContainerStarted","Data":"f9b2ef5cd5cadaf30bd3b2a0e5d4b1eef899bf5c916f96e9dfbcba5bf322f1c8"} Nov 24 11:40:47 crc kubenswrapper[4789]: I1124 11:40:47.276091 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-svc2n" Nov 24 11:40:57 crc kubenswrapper[4789]: I1124 11:40:57.975483 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx"] Nov 24 11:40:57 crc kubenswrapper[4789]: I1124 11:40:57.977060 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:57 crc kubenswrapper[4789]: I1124 11:40:57.979328 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:40:57 crc kubenswrapper[4789]: I1124 11:40:57.988413 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx"] Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.053668 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.053713 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.053746 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pw47\" (UniqueName: \"kubernetes.io/projected/f6471629-48a8-49da-be9a-ad77354e63b1-kube-api-access-8pw47\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.154993 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.155047 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.155083 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pw47\" (UniqueName: \"kubernetes.io/projected/f6471629-48a8-49da-be9a-ad77354e63b1-kube-api-access-8pw47\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.155481 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.155634 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.190395 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pw47\" (UniqueName: \"kubernetes.io/projected/f6471629-48a8-49da-be9a-ad77354e63b1-kube-api-access-8pw47\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.293322 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" Nov 24 11:40:58 crc kubenswrapper[4789]: I1124 11:40:58.526275 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx"] Nov 24 11:40:59 crc kubenswrapper[4789]: I1124 11:40:59.496666 4789 generic.go:334] "Generic (PLEG): container finished" podID="f6471629-48a8-49da-be9a-ad77354e63b1" containerID="658588ccca25bc9246dfd88d05d8a198fa256220f93cb12c2063f506407f6ecd" exitCode=0 Nov 24 11:40:59 crc kubenswrapper[4789]: I1124 11:40:59.496889 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" event={"ID":"f6471629-48a8-49da-be9a-ad77354e63b1","Type":"ContainerDied","Data":"658588ccca25bc9246dfd88d05d8a198fa256220f93cb12c2063f506407f6ecd"} Nov 24 11:40:59 crc kubenswrapper[4789]: I1124 11:40:59.497085 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" event={"ID":"f6471629-48a8-49da-be9a-ad77354e63b1","Type":"ContainerStarted","Data":"6684f802073a6bbd714071e8c32fc791360a795395fe68e5586a091740b03452"} Nov 24 11:41:01 crc kubenswrapper[4789]: I1124 11:41:01.508335 4789 generic.go:334] "Generic (PLEG): container finished" podID="f6471629-48a8-49da-be9a-ad77354e63b1" containerID="6ffcdc48f745164db69753471947ba0c63cbc59728879e0e5f709b2f2bd68d71" exitCode=0 Nov 24 11:41:01 crc kubenswrapper[4789]: I1124 11:41:01.508399 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" event={"ID":"f6471629-48a8-49da-be9a-ad77354e63b1","Type":"ContainerDied","Data":"6ffcdc48f745164db69753471947ba0c63cbc59728879e0e5f709b2f2bd68d71"} Nov 24 11:41:02 crc kubenswrapper[4789]: I1124 11:41:02.518242 4789 generic.go:334] "Generic (PLEG): container finished" podID="f6471629-48a8-49da-be9a-ad77354e63b1" containerID="27dd9c35bca26b054275fb96fc4334e3fcea370c4950c0238d150e6aebf00d49" exitCode=0 Nov 24 11:41:02 crc kubenswrapper[4789]: I1124 11:41:02.518324 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" event={"ID":"f6471629-48a8-49da-be9a-ad77354e63b1","Type":"ContainerDied","Data":"27dd9c35bca26b054275fb96fc4334e3fcea370c4950c0238d150e6aebf00d49"} Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.770987 4789 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.830607 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-bundle\") pod \"f6471629-48a8-49da-be9a-ad77354e63b1\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") "
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.830667 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-util\") pod \"f6471629-48a8-49da-be9a-ad77354e63b1\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") "
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.830716 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pw47\" (UniqueName: \"kubernetes.io/projected/f6471629-48a8-49da-be9a-ad77354e63b1-kube-api-access-8pw47\") pod \"f6471629-48a8-49da-be9a-ad77354e63b1\" (UID: \"f6471629-48a8-49da-be9a-ad77354e63b1\") "
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.831197 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-bundle" (OuterVolumeSpecName: "bundle") pod "f6471629-48a8-49da-be9a-ad77354e63b1" (UID: "f6471629-48a8-49da-be9a-ad77354e63b1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.838408 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6471629-48a8-49da-be9a-ad77354e63b1-kube-api-access-8pw47" (OuterVolumeSpecName: "kube-api-access-8pw47") pod "f6471629-48a8-49da-be9a-ad77354e63b1" (UID: "f6471629-48a8-49da-be9a-ad77354e63b1"). InnerVolumeSpecName "kube-api-access-8pw47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.844489 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-util" (OuterVolumeSpecName: "util") pod "f6471629-48a8-49da-be9a-ad77354e63b1" (UID: "f6471629-48a8-49da-be9a-ad77354e63b1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.931782 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pw47\" (UniqueName: \"kubernetes.io/projected/f6471629-48a8-49da-be9a-ad77354e63b1-kube-api-access-8pw47\") on node \"crc\" DevicePath \"\""
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.931824 4789 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:41:03 crc kubenswrapper[4789]: I1124 11:41:03.931839 4789 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6471629-48a8-49da-be9a-ad77354e63b1-util\") on node \"crc\" DevicePath \"\""
Nov 24 11:41:04 crc kubenswrapper[4789]: I1124 11:41:04.532288 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx" event={"ID":"f6471629-48a8-49da-be9a-ad77354e63b1","Type":"ContainerDied","Data":"6684f802073a6bbd714071e8c32fc791360a795395fe68e5586a091740b03452"}
Nov 24 11:41:04 crc kubenswrapper[4789]: I1124 11:41:04.532378 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx"
Nov 24 11:41:04 crc kubenswrapper[4789]: I1124 11:41:04.532390 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6684f802073a6bbd714071e8c32fc791360a795395fe68e5586a091740b03452"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.543618 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-pf9g5"]
Nov 24 11:41:09 crc kubenswrapper[4789]: E1124 11:41:09.544193 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6471629-48a8-49da-be9a-ad77354e63b1" containerName="extract"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.544206 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6471629-48a8-49da-be9a-ad77354e63b1" containerName="extract"
Nov 24 11:41:09 crc kubenswrapper[4789]: E1124 11:41:09.544217 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6471629-48a8-49da-be9a-ad77354e63b1" containerName="pull"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.544223 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6471629-48a8-49da-be9a-ad77354e63b1" containerName="pull"
Nov 24 11:41:09 crc kubenswrapper[4789]: E1124 11:41:09.544232 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6471629-48a8-49da-be9a-ad77354e63b1" containerName="util"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.544238 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6471629-48a8-49da-be9a-ad77354e63b1" containerName="util"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.544334 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6471629-48a8-49da-be9a-ad77354e63b1" containerName="extract"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.544713 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-pf9g5"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.550688 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dwmbh"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.550951 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.551754 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.566063 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-pf9g5"]
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.616405 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qzvm\" (UniqueName: \"kubernetes.io/projected/f1714436-b482-4a7a-9ea2-7ef512ac500c-kube-api-access-8qzvm\") pod \"nmstate-operator-557fdffb88-pf9g5\" (UID: \"f1714436-b482-4a7a-9ea2-7ef512ac500c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-pf9g5"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.718271 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qzvm\" (UniqueName: \"kubernetes.io/projected/f1714436-b482-4a7a-9ea2-7ef512ac500c-kube-api-access-8qzvm\") pod \"nmstate-operator-557fdffb88-pf9g5\" (UID: \"f1714436-b482-4a7a-9ea2-7ef512ac500c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-pf9g5"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.736256 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qzvm\" (UniqueName: \"kubernetes.io/projected/f1714436-b482-4a7a-9ea2-7ef512ac500c-kube-api-access-8qzvm\") pod \"nmstate-operator-557fdffb88-pf9g5\" (UID: \"f1714436-b482-4a7a-9ea2-7ef512ac500c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-pf9g5"
Nov 24 11:41:09 crc kubenswrapper[4789]: I1124 11:41:09.862214 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-pf9g5"
Nov 24 11:41:10 crc kubenswrapper[4789]: I1124 11:41:10.244425 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-pf9g5"]
Nov 24 11:41:10 crc kubenswrapper[4789]: I1124 11:41:10.570956 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-pf9g5" event={"ID":"f1714436-b482-4a7a-9ea2-7ef512ac500c","Type":"ContainerStarted","Data":"36c7e25916d94775556e333fafbfd904bafc4d63a78b790478fcdbaf492eefec"}
Nov 24 11:41:12 crc kubenswrapper[4789]: I1124 11:41:12.592806 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-pf9g5" event={"ID":"f1714436-b482-4a7a-9ea2-7ef512ac500c","Type":"ContainerStarted","Data":"c3a1e0528c8d66675bef8f84e72d94af185cb01129200876c3fd9d8468c7f253"}
Nov 24 11:41:12 crc kubenswrapper[4789]: I1124 11:41:12.616421 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-pf9g5" podStartSLOduration=1.639524199 podStartE2EDuration="3.616398201s" podCreationTimestamp="2025-11-24 11:41:09 +0000 UTC" firstStartedPulling="2025-11-24 11:41:10.259053513 +0000 UTC m=+652.841524892" lastFinishedPulling="2025-11-24 11:41:12.235927515 +0000 UTC m=+654.818398894" observedRunningTime="2025-11-24 11:41:12.608192047 +0000 UTC m=+655.190663436" watchObservedRunningTime="2025-11-24 11:41:12.616398201 +0000 UTC m=+655.198869580"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.547845 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456"]
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.549307 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.550508 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-cwrhs"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.565075 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456"]
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.576945 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz"]
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.577679 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.581075 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.590700 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz"]
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.601220 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tc6cw"]
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.601890 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tc6cw"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.649036 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-nmstate-lock\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.649377 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s97l\" (UniqueName: \"kubernetes.io/projected/7912ee90-6561-4ccb-be26-e14a7b5d4215-kube-api-access-8s97l\") pod \"nmstate-metrics-5dcf9c57c5-cr456\" (UID: \"7912ee90-6561-4ccb-be26-e14a7b5d4215\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.649411 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxtc\" (UniqueName: \"kubernetes.io/projected/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-kube-api-access-2dxtc\") pod \"nmstate-webhook-6b89b748d8-lgcsz\" (UID: \"37289b7f-66b0-4c52-98d7-2bbd918a4f4d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.649432 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-ovs-socket\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.649468 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b697x\" (UniqueName: \"kubernetes.io/projected/b8e5c0f4-380c-43d6-be7e-335586100004-kube-api-access-b697x\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.649500 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-dbus-socket\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.649523 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-lgcsz\" (UID: \"37289b7f-66b0-4c52-98d7-2bbd918a4f4d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750278 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-nmstate-lock\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw"
Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750327 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s97l\" (UniqueName: \"kubernetes.io/projected/7912ee90-6561-4ccb-be26-e14a7b5d4215-kube-api-access-8s97l\") pod \"nmstate-metrics-5dcf9c57c5-cr456\" (UID: \"7912ee90-6561-4ccb-be26-e14a7b5d4215\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456"
\"kubernetes.io/projected/7912ee90-6561-4ccb-be26-e14a7b5d4215-kube-api-access-8s97l\") pod \"nmstate-metrics-5dcf9c57c5-cr456\" (UID: \"7912ee90-6561-4ccb-be26-e14a7b5d4215\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750361 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxtc\" (UniqueName: \"kubernetes.io/projected/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-kube-api-access-2dxtc\") pod \"nmstate-webhook-6b89b748d8-lgcsz\" (UID: \"37289b7f-66b0-4c52-98d7-2bbd918a4f4d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750387 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-ovs-socket\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750408 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b697x\" (UniqueName: \"kubernetes.io/projected/b8e5c0f4-380c-43d6-be7e-335586100004-kube-api-access-b697x\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750414 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-nmstate-lock\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750427 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-dbus-socket\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750535 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-lgcsz\" (UID: \"37289b7f-66b0-4c52-98d7-2bbd918a4f4d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750651 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-ovs-socket\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.750688 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b8e5c0f4-380c-43d6-be7e-335586100004-dbus-socket\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:18 crc kubenswrapper[4789]: E1124 11:41:18.750831 4789 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 24 11:41:18 crc kubenswrapper[4789]: E1124 
11:41:18.750887 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-tls-key-pair podName:37289b7f-66b0-4c52-98d7-2bbd918a4f4d nodeName:}" failed. No retries permitted until 2025-11-24 11:41:19.250872826 +0000 UTC m=+661.833344215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-tls-key-pair") pod "nmstate-webhook-6b89b748d8-lgcsz" (UID: "37289b7f-66b0-4c52-98d7-2bbd918a4f4d") : secret "openshift-nmstate-webhook" not found Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.757884 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64"] Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.758511 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.760397 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rlvg6" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.760868 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.761274 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.778482 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64"] Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.781444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b697x\" (UniqueName: \"kubernetes.io/projected/b8e5c0f4-380c-43d6-be7e-335586100004-kube-api-access-b697x\") pod \"nmstate-handler-tc6cw\" (UID: \"b8e5c0f4-380c-43d6-be7e-335586100004\") " pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.789519 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s97l\" (UniqueName: \"kubernetes.io/projected/7912ee90-6561-4ccb-be26-e14a7b5d4215-kube-api-access-8s97l\") pod \"nmstate-metrics-5dcf9c57c5-cr456\" (UID: \"7912ee90-6561-4ccb-be26-e14a7b5d4215\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.799137 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxtc\" (UniqueName: \"kubernetes.io/projected/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-kube-api-access-2dxtc\") pod \"nmstate-webhook-6b89b748d8-lgcsz\" (UID: \"37289b7f-66b0-4c52-98d7-2bbd918a4f4d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.851399 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e20c522e-8987-4a5b-84a4-c40098d2e179-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.851484 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmbm5\" (UniqueName: 
\"kubernetes.io/projected/e20c522e-8987-4a5b-84a4-c40098d2e179-kube-api-access-jmbm5\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.851507 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20c522e-8987-4a5b-84a4-c40098d2e179-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.864813 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.925882 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.952977 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e20c522e-8987-4a5b-84a4-c40098d2e179-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.953038 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmbm5\" (UniqueName: \"kubernetes.io/projected/e20c522e-8987-4a5b-84a4-c40098d2e179-kube-api-access-jmbm5\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.953060 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20c522e-8987-4a5b-84a4-c40098d2e179-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:18 crc kubenswrapper[4789]: E1124 11:41:18.953189 4789 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 24 11:41:18 crc kubenswrapper[4789]: E1124 11:41:18.953233 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20c522e-8987-4a5b-84a4-c40098d2e179-plugin-serving-cert podName:e20c522e-8987-4a5b-84a4-c40098d2e179 nodeName:}" failed. No retries permitted until 2025-11-24 11:41:19.453218125 +0000 UTC m=+662.035689504 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/e20c522e-8987-4a5b-84a4-c40098d2e179-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-znx64" (UID: "e20c522e-8987-4a5b-84a4-c40098d2e179") : secret "plugin-serving-cert" not found Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.954244 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e20c522e-8987-4a5b-84a4-c40098d2e179-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:18 crc kubenswrapper[4789]: W1124 11:41:18.957287 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e5c0f4_380c_43d6_be7e_335586100004.slice/crio-0ac5ce0fb112b2b4bb7ea4b507b1268a610b0ed8c1dbd6ac8c3e23241ea7ad63 WatchSource:0}: Error finding container 0ac5ce0fb112b2b4bb7ea4b507b1268a610b0ed8c1dbd6ac8c3e23241ea7ad63: Status 404 returned error can't find the container with id 0ac5ce0fb112b2b4bb7ea4b507b1268a610b0ed8c1dbd6ac8c3e23241ea7ad63 Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.965553 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69b4887dfd-kqws5"] Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.966154 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:18 crc kubenswrapper[4789]: I1124 11:41:18.982183 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmbm5\" (UniqueName: \"kubernetes.io/projected/e20c522e-8987-4a5b-84a4-c40098d2e179-kube-api-access-jmbm5\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.016598 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69b4887dfd-kqws5"] Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.054153 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-trusted-ca-bundle\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.054196 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf5rf\" (UniqueName: \"kubernetes.io/projected/264d5190-c359-403e-9f09-cadd6d0fda47-kube-api-access-pf5rf\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.054221 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/264d5190-c359-403e-9f09-cadd6d0fda47-console-oauth-config\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.054246 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-oauth-serving-cert\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.054271 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-service-ca\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.054314 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/264d5190-c359-403e-9f09-cadd6d0fda47-console-serving-cert\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.054364 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-console-config\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.155896 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/264d5190-c359-403e-9f09-cadd6d0fda47-console-oauth-config\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.155931 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-oauth-serving-cert\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.155956 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-service-ca\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.155975 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/264d5190-c359-403e-9f09-cadd6d0fda47-console-serving-cert\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.156026 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-console-config\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: 
I1124 11:41:19.156061 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-trusted-ca-bundle\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.156497 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf5rf\" (UniqueName: \"kubernetes.io/projected/264d5190-c359-403e-9f09-cadd6d0fda47-kube-api-access-pf5rf\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.156877 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-oauth-serving-cert\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.156918 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-service-ca\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.156934 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-console-config\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.157800 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/264d5190-c359-403e-9f09-cadd6d0fda47-trusted-ca-bundle\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.160006 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/264d5190-c359-403e-9f09-cadd6d0fda47-console-oauth-config\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.161147 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/264d5190-c359-403e-9f09-cadd6d0fda47-console-serving-cert\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.172412 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf5rf\" (UniqueName: \"kubernetes.io/projected/264d5190-c359-403e-9f09-cadd6d0fda47-kube-api-access-pf5rf\") pod \"console-69b4887dfd-kqws5\" (UID: \"264d5190-c359-403e-9f09-cadd6d0fda47\") " pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.258179 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-lgcsz\" (UID: \"37289b7f-66b0-4c52-98d7-2bbd918a4f4d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.261829 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/37289b7f-66b0-4c52-98d7-2bbd918a4f4d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-lgcsz\" (UID: \"37289b7f-66b0-4c52-98d7-2bbd918a4f4d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.291922 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.313429 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456"] Nov 24 11:41:19 crc kubenswrapper[4789]: W1124 11:41:19.321623 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7912ee90_6561_4ccb_be26_e14a7b5d4215.slice/crio-2d792ceb7af32024961cc137d827959d77c46615630aa7b5608cc938c2712ade WatchSource:0}: Error finding container 2d792ceb7af32024961cc137d827959d77c46615630aa7b5608cc938c2712ade: Status 404 returned error can't find the container with id 2d792ceb7af32024961cc137d827959d77c46615630aa7b5608cc938c2712ade Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.466841 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20c522e-8987-4a5b-84a4-c40098d2e179-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.488837 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20c522e-8987-4a5b-84a4-c40098d2e179-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-znx64\" (UID: \"e20c522e-8987-4a5b-84a4-c40098d2e179\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.490200 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.628688 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tc6cw" event={"ID":"b8e5c0f4-380c-43d6-be7e-335586100004","Type":"ContainerStarted","Data":"0ac5ce0fb112b2b4bb7ea4b507b1268a610b0ed8c1dbd6ac8c3e23241ea7ad63"} Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.631122 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456" event={"ID":"7912ee90-6561-4ccb-be26-e14a7b5d4215","Type":"ContainerStarted","Data":"2d792ceb7af32024961cc137d827959d77c46615630aa7b5608cc938c2712ade"} Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.672081 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.766931 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69b4887dfd-kqws5"] Nov 24 11:41:19 crc kubenswrapper[4789]: W1124 11:41:19.774062 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod264d5190_c359_403e_9f09_cadd6d0fda47.slice/crio-62c1d7e1e29a93111a1a0754815d7ad94bd3dffe5594ecd4827ebf540089002b WatchSource:0}: Error finding container 62c1d7e1e29a93111a1a0754815d7ad94bd3dffe5594ecd4827ebf540089002b: Status 404 returned error can't find the container with id 62c1d7e1e29a93111a1a0754815d7ad94bd3dffe5594ecd4827ebf540089002b Nov 24 11:41:19 crc kubenswrapper[4789]: I1124 11:41:19.956443 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz"] Nov 24 11:41:19 crc kubenswrapper[4789]: W1124 11:41:19.966765 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37289b7f_66b0_4c52_98d7_2bbd918a4f4d.slice/crio-3ae0a9b0337cf80ec6d50d1f76c7302eacf56855314f0521efa4e21e7351899f WatchSource:0}: Error finding container 3ae0a9b0337cf80ec6d50d1f76c7302eacf56855314f0521efa4e21e7351899f: Status 404 returned error can't find the container with id 3ae0a9b0337cf80ec6d50d1f76c7302eacf56855314f0521efa4e21e7351899f Nov 24 11:41:20 crc kubenswrapper[4789]: I1124 11:41:20.078040 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64"] Nov 24 11:41:20 crc kubenswrapper[4789]: W1124 11:41:20.085219 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode20c522e_8987_4a5b_84a4_c40098d2e179.slice/crio-3f644f810da4e71d597567f55575bf7efd457aa96919413c8fd84fe9d70fc448 WatchSource:0}: Error finding container 3f644f810da4e71d597567f55575bf7efd457aa96919413c8fd84fe9d70fc448: Status 404 returned error can't find the container with id 3f644f810da4e71d597567f55575bf7efd457aa96919413c8fd84fe9d70fc448 Nov 24 11:41:20 crc kubenswrapper[4789]: I1124 11:41:20.636517 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69b4887dfd-kqws5" event={"ID":"264d5190-c359-403e-9f09-cadd6d0fda47","Type":"ContainerStarted","Data":"29d36fac3d295d526d2e143c00849ee9d35ac9031d6dd0a1e8a4fa85c3e9220b"} Nov 24 11:41:20 crc kubenswrapper[4789]: I1124 11:41:20.636560 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69b4887dfd-kqws5" event={"ID":"264d5190-c359-403e-9f09-cadd6d0fda47","Type":"ContainerStarted","Data":"62c1d7e1e29a93111a1a0754815d7ad94bd3dffe5594ecd4827ebf540089002b"} Nov 24 11:41:20 crc kubenswrapper[4789]: I1124 11:41:20.637954 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" event={"ID":"e20c522e-8987-4a5b-84a4-c40098d2e179","Type":"ContainerStarted","Data":"3f644f810da4e71d597567f55575bf7efd457aa96919413c8fd84fe9d70fc448"} Nov 24 11:41:20 crc kubenswrapper[4789]: I1124 11:41:20.638713 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" event={"ID":"37289b7f-66b0-4c52-98d7-2bbd918a4f4d","Type":"ContainerStarted","Data":"3ae0a9b0337cf80ec6d50d1f76c7302eacf56855314f0521efa4e21e7351899f"} Nov 24 11:41:22 crc 
kubenswrapper[4789]: I1124 11:41:22.655087 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tc6cw" event={"ID":"b8e5c0f4-380c-43d6-be7e-335586100004","Type":"ContainerStarted","Data":"0bc6426db2ccef47eff41090e2419686c09bb7b21f3f744e3817dd21b3d6f061"} Nov 24 11:41:22 crc kubenswrapper[4789]: I1124 11:41:22.655528 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:22 crc kubenswrapper[4789]: I1124 11:41:22.657055 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" event={"ID":"37289b7f-66b0-4c52-98d7-2bbd918a4f4d","Type":"ContainerStarted","Data":"fbe5b7fecd46909808a0af47f7c12ff596e3326d2bccc1391511425a798d627d"} Nov 24 11:41:22 crc kubenswrapper[4789]: I1124 11:41:22.657592 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" Nov 24 11:41:22 crc kubenswrapper[4789]: I1124 11:41:22.659150 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456" event={"ID":"7912ee90-6561-4ccb-be26-e14a7b5d4215","Type":"ContainerStarted","Data":"aa6850d8bf57c781113e2d4730f49753a98748bbd7bae9ca962bf7d8d492b3e4"} Nov 24 11:41:22 crc kubenswrapper[4789]: I1124 11:41:22.672155 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69b4887dfd-kqws5" podStartSLOduration=4.67213917 podStartE2EDuration="4.67213917s" podCreationTimestamp="2025-11-24 11:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:41:20.653320954 +0000 UTC m=+663.235792343" watchObservedRunningTime="2025-11-24 11:41:22.67213917 +0000 UTC m=+665.254610559" Nov 24 11:41:22 crc kubenswrapper[4789]: I1124 11:41:22.691632 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tc6cw" podStartSLOduration=2.075316645 podStartE2EDuration="4.691610255s" podCreationTimestamp="2025-11-24 11:41:18 +0000 UTC" firstStartedPulling="2025-11-24 11:41:18.970905306 +0000 UTC m=+661.553376685" lastFinishedPulling="2025-11-24 11:41:21.587198916 +0000 UTC m=+664.169670295" observedRunningTime="2025-11-24 11:41:22.671381772 +0000 UTC m=+665.253853151" watchObservedRunningTime="2025-11-24 11:41:22.691610255 +0000 UTC m=+665.274081634" Nov 24 11:41:22 crc kubenswrapper[4789]: I1124 11:41:22.691956 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" podStartSLOduration=3.058475436 podStartE2EDuration="4.691931074s" podCreationTimestamp="2025-11-24 11:41:18 +0000 UTC" firstStartedPulling="2025-11-24 11:41:19.969535041 +0000 UTC m=+662.552006440" lastFinishedPulling="2025-11-24 11:41:21.602990699 +0000 UTC m=+664.185462078" observedRunningTime="2025-11-24 11:41:22.686292763 +0000 UTC m=+665.268764152" watchObservedRunningTime="2025-11-24 11:41:22.691931074 +0000 UTC m=+665.274402453" Nov 24 11:41:23 crc kubenswrapper[4789]: I1124 11:41:23.666652 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" event={"ID":"e20c522e-8987-4a5b-84a4-c40098d2e179","Type":"ContainerStarted","Data":"bda0b5984adf19e05a65d7a22526ab0f568ffe4cd8763659a6a663040c4767e4"} Nov 24 11:41:24 crc kubenswrapper[4789]: I1124 11:41:24.674851 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456" event={"ID":"7912ee90-6561-4ccb-be26-e14a7b5d4215","Type":"ContainerStarted","Data":"cef8a93e9b50c6675ea38ec8b2d67ad6327eeba1661e8ccfe3ea5eae710addbd"} Nov 24 11:41:24 crc kubenswrapper[4789]: I1124 11:41:24.701699 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-cr456" podStartSLOduration=2.334979863 podStartE2EDuration="6.701680024s" podCreationTimestamp="2025-11-24 11:41:18 +0000 UTC" firstStartedPulling="2025-11-24 11:41:19.324090673 +0000 UTC m=+661.906562052" lastFinishedPulling="2025-11-24 11:41:23.690790834 +0000 UTC m=+666.273262213" observedRunningTime="2025-11-24 11:41:24.69711832 +0000 UTC m=+667.279589719" watchObservedRunningTime="2025-11-24 11:41:24.701680024 +0000 UTC m=+667.284151413" Nov 24 11:41:24 crc kubenswrapper[4789]: I1124 11:41:24.705321 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-znx64" podStartSLOduration=4.032259481 podStartE2EDuration="6.705306274s" podCreationTimestamp="2025-11-24 11:41:18 +0000 UTC" firstStartedPulling="2025-11-24 11:41:20.087321125 +0000 UTC m=+662.669792504" lastFinishedPulling="2025-11-24 11:41:22.760367918 +0000 UTC m=+665.342839297" observedRunningTime="2025-11-24 11:41:23.679681077 +0000 UTC m=+666.262152456" watchObservedRunningTime="2025-11-24 11:41:24.705306274 +0000 UTC m=+667.287777653" Nov 24 11:41:28 crc kubenswrapper[4789]: I1124 11:41:28.962452 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tc6cw" Nov 24 11:41:29 crc kubenswrapper[4789]: I1124 11:41:29.292525 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:29 crc kubenswrapper[4789]: I1124 11:41:29.292576 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:29 crc kubenswrapper[4789]: I1124 11:41:29.302789 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:29 crc kubenswrapper[4789]: I1124 11:41:29.724848 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69b4887dfd-kqws5" Nov 24 11:41:29 crc kubenswrapper[4789]: I1124 11:41:29.825009 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ljwn7"] Nov 24 11:41:39 crc kubenswrapper[4789]: I1124 11:41:39.499180 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-lgcsz" Nov 24 11:41:50 crc kubenswrapper[4789]: I1124 11:41:50.163632 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:41:50 crc kubenswrapper[4789]: I1124 11:41:50.164263 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.565976 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd"] Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.568507 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.570327 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.582362 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd"] Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.649049 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.649105 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lxmb\" (UniqueName: \"kubernetes.io/projected/97143caa-58b4-4d96-a4c7-9ec1bb364425-kube-api-access-9lxmb\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.649221 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.749973 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.750021 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lxmb\" (UniqueName: \"kubernetes.io/projected/97143caa-58b4-4d96-a4c7-9ec1bb364425-kube-api-access-9lxmb\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.750091 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: 
\"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.750426 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.750519 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.779936 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lxmb\" (UniqueName: \"kubernetes.io/projected/97143caa-58b4-4d96-a4c7-9ec1bb364425-kube-api-access-9lxmb\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:53 crc kubenswrapper[4789]: I1124 11:41:53.934709 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:54 crc kubenswrapper[4789]: I1124 11:41:54.351609 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd"] Nov 24 11:41:54 crc kubenswrapper[4789]: I1124 11:41:54.874009 4789 generic.go:334] "Generic (PLEG): container finished" podID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerID="54bb15ce2e62d0f31029e8ccb14e018c60816c72535da718a52e74010d983271" exitCode=0 Nov 24 11:41:54 crc kubenswrapper[4789]: I1124 11:41:54.874070 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" event={"ID":"97143caa-58b4-4d96-a4c7-9ec1bb364425","Type":"ContainerDied","Data":"54bb15ce2e62d0f31029e8ccb14e018c60816c72535da718a52e74010d983271"} Nov 24 11:41:54 crc kubenswrapper[4789]: I1124 11:41:54.874099 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" event={"ID":"97143caa-58b4-4d96-a4c7-9ec1bb364425","Type":"ContainerStarted","Data":"ebab66eab60552171b635808d33b642b31e8870a6c346384b4962a9773111ce4"} Nov 24 11:41:54 crc kubenswrapper[4789]: I1124 11:41:54.879808 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ljwn7" podUID="c9a07607-7a0f-4436-a3bc-9bd2cbf61663" containerName="console" containerID="cri-o://67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a" gracePeriod=15 Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.244496 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ljwn7_c9a07607-7a0f-4436-a3bc-9bd2cbf61663/console/0.log" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.244561 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ljwn7"
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.270158 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6klb\" (UniqueName: \"kubernetes.io/projected/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-kube-api-access-r6klb\") pod \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") "
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.270200 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-service-ca\") pod \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") "
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.270234 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-config\") pod \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") "
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.270262 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-trusted-ca-bundle\") pod \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") "
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.270329 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-oauth-serving-cert\") pod \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") "
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.270377 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-serving-cert\") pod \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") "
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.270395 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-oauth-config\") pod \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\" (UID: \"c9a07607-7a0f-4436-a3bc-9bd2cbf61663\") "
Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.271431 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-config" (OuterVolumeSpecName: "console-config") pod "c9a07607-7a0f-4436-a3bc-9bd2cbf61663" (UID: "c9a07607-7a0f-4436-a3bc-9bd2cbf61663"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.272024 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c9a07607-7a0f-4436-a3bc-9bd2cbf61663" (UID: "c9a07607-7a0f-4436-a3bc-9bd2cbf61663"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.272094 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c9a07607-7a0f-4436-a3bc-9bd2cbf61663" (UID: "c9a07607-7a0f-4436-a3bc-9bd2cbf61663"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.277426 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c9a07607-7a0f-4436-a3bc-9bd2cbf61663" (UID: "c9a07607-7a0f-4436-a3bc-9bd2cbf61663"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.278658 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-kube-api-access-r6klb" (OuterVolumeSpecName: "kube-api-access-r6klb") pod "c9a07607-7a0f-4436-a3bc-9bd2cbf61663" (UID: "c9a07607-7a0f-4436-a3bc-9bd2cbf61663"). InnerVolumeSpecName "kube-api-access-r6klb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.278704 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c9a07607-7a0f-4436-a3bc-9bd2cbf61663" (UID: "c9a07607-7a0f-4436-a3bc-9bd2cbf61663"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.371531 4789 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.371565 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6klb\" (UniqueName: \"kubernetes.io/projected/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-kube-api-access-r6klb\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.371579 4789 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.371591 4789 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.371604 4789 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.371614 4789 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.371625 4789 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a07607-7a0f-4436-a3bc-9bd2cbf61663-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.885744 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ljwn7_c9a07607-7a0f-4436-a3bc-9bd2cbf61663/console/0.log" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.885829 4789 generic.go:334] "Generic (PLEG): container finished" podID="c9a07607-7a0f-4436-a3bc-9bd2cbf61663" containerID="67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a" exitCode=2 Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.885922 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ljwn7" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.885938 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ljwn7" event={"ID":"c9a07607-7a0f-4436-a3bc-9bd2cbf61663","Type":"ContainerDied","Data":"67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a"} Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.886125 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ljwn7" event={"ID":"c9a07607-7a0f-4436-a3bc-9bd2cbf61663","Type":"ContainerDied","Data":"707e56f3e14c9e6be4c0a5f7c120587f7571d2c267d7f3012435986cb28c2707"} Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.886188 4789 scope.go:117] "RemoveContainer" containerID="67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.921252 4789 scope.go:117] "RemoveContainer" containerID="67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a" Nov 24 11:41:55 crc kubenswrapper[4789]: E1124 11:41:55.922065 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a\": container with ID starting with 67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a not found: ID does not exist" containerID="67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.922179 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a"} err="failed to get container status \"67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a\": rpc error: code = NotFound desc = could not find container \"67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a\": container with ID starting with 67cb5c038d9a3da38c1770750bc406619678d32a8b33c32bfe90fd7030d0b93a not found: ID does not exist" Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.940588 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ljwn7"] Nov 24 11:41:55 crc kubenswrapper[4789]: I1124 11:41:55.945626 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ljwn7"] Nov 24 11:41:56 crc kubenswrapper[4789]: I1124 11:41:56.175517 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a07607-7a0f-4436-a3bc-9bd2cbf61663" path="/var/lib/kubelet/pods/c9a07607-7a0f-4436-a3bc-9bd2cbf61663/volumes" Nov 24 11:41:56 crc kubenswrapper[4789]: I1124 11:41:56.895056 4789 generic.go:334] "Generic (PLEG): container finished" podID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerID="fd38dd8a4226df5c444af4bbc26601b35ebe96dd13d26aab6c64a2a788e459c1" exitCode=0 Nov 24 11:41:56 crc kubenswrapper[4789]: I1124 11:41:56.895117 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" event={"ID":"97143caa-58b4-4d96-a4c7-9ec1bb364425","Type":"ContainerDied","Data":"fd38dd8a4226df5c444af4bbc26601b35ebe96dd13d26aab6c64a2a788e459c1"} Nov 24 11:41:57 crc kubenswrapper[4789]: I1124 11:41:57.906758 4789 generic.go:334] "Generic (PLEG): container finished" podID="97143caa-58b4-4d96-a4c7-9ec1bb364425" 
containerID="accbfda8f12f292fbb2ea197a352ac6ec17ae2221bc4a82a08043ec674d22ea6" exitCode=0 Nov 24 11:41:57 crc kubenswrapper[4789]: I1124 11:41:57.906806 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" event={"ID":"97143caa-58b4-4d96-a4c7-9ec1bb364425","Type":"ContainerDied","Data":"accbfda8f12f292fbb2ea197a352ac6ec17ae2221bc4a82a08043ec674d22ea6"} Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.204074 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.322116 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-bundle\") pod \"97143caa-58b4-4d96-a4c7-9ec1bb364425\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.322193 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-util\") pod \"97143caa-58b4-4d96-a4c7-9ec1bb364425\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.322261 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lxmb\" (UniqueName: \"kubernetes.io/projected/97143caa-58b4-4d96-a4c7-9ec1bb364425-kube-api-access-9lxmb\") pod \"97143caa-58b4-4d96-a4c7-9ec1bb364425\" (UID: \"97143caa-58b4-4d96-a4c7-9ec1bb364425\") " Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.329063 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97143caa-58b4-4d96-a4c7-9ec1bb364425-kube-api-access-9lxmb" (OuterVolumeSpecName: "kube-api-access-9lxmb") pod "97143caa-58b4-4d96-a4c7-9ec1bb364425" (UID: "97143caa-58b4-4d96-a4c7-9ec1bb364425"). InnerVolumeSpecName "kube-api-access-9lxmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.332214 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-bundle" (OuterVolumeSpecName: "bundle") pod "97143caa-58b4-4d96-a4c7-9ec1bb364425" (UID: "97143caa-58b4-4d96-a4c7-9ec1bb364425"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.338799 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-util" (OuterVolumeSpecName: "util") pod "97143caa-58b4-4d96-a4c7-9ec1bb364425" (UID: "97143caa-58b4-4d96-a4c7-9ec1bb364425"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.423818 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lxmb\" (UniqueName: \"kubernetes.io/projected/97143caa-58b4-4d96-a4c7-9ec1bb364425-kube-api-access-9lxmb\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.423856 4789 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.423868 4789 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97143caa-58b4-4d96-a4c7-9ec1bb364425-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.924761 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" event={"ID":"97143caa-58b4-4d96-a4c7-9ec1bb364425","Type":"ContainerDied","Data":"ebab66eab60552171b635808d33b642b31e8870a6c346384b4962a9773111ce4"} Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.924826 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebab66eab60552171b635808d33b642b31e8870a6c346384b4962a9773111ce4" Nov 24 11:41:59 crc kubenswrapper[4789]: I1124 11:41:59.924858 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.576865 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"] Nov 24 11:42:08 crc kubenswrapper[4789]: E1124 11:42:08.577587 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerName="extract" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.577600 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerName="extract" Nov 24 11:42:08 crc kubenswrapper[4789]: E1124 11:42:08.577614 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerName="pull" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.577619 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerName="pull" Nov 24 11:42:08 crc kubenswrapper[4789]: E1124 11:42:08.577627 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerName="util" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.577633 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerName="util" Nov 24 11:42:08 crc kubenswrapper[4789]: E1124 11:42:08.577652 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a07607-7a0f-4436-a3bc-9bd2cbf61663" containerName="console" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.577658 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a07607-7a0f-4436-a3bc-9bd2cbf61663" containerName="console" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.577744 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="97143caa-58b4-4d96-a4c7-9ec1bb364425" containerName="extract" Nov 
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.577756 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a07607-7a0f-4436-a3bc-9bd2cbf61663" containerName="console"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.578121 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.582249 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xgpgj"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.582287 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.582379 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.582438 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.582523 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.592889 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"]
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.735364 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffhw4\" (UniqueName: \"kubernetes.io/projected/79ff7401-87f7-494c-8b09-aa9fc59a934b-kube-api-access-ffhw4\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.735406 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79ff7401-87f7-494c-8b09-aa9fc59a934b-webhook-cert\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.735483 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79ff7401-87f7-494c-8b09-aa9fc59a934b-apiservice-cert\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"
Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.836554 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffhw4\" (UniqueName: \"kubernetes.io/projected/79ff7401-87f7-494c-8b09-aa9fc59a934b-kube-api-access-ffhw4\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"
\"kubernetes.io/secret/79ff7401-87f7-494c-8b09-aa9fc59a934b-webhook-cert\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.836656 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79ff7401-87f7-494c-8b09-aa9fc59a934b-apiservice-cert\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.842220 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79ff7401-87f7-494c-8b09-aa9fc59a934b-webhook-cert\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.851265 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79ff7401-87f7-494c-8b09-aa9fc59a934b-apiservice-cert\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.854400 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffhw4\" (UniqueName: \"kubernetes.io/projected/79ff7401-87f7-494c-8b09-aa9fc59a934b-kube-api-access-ffhw4\") pod \"metallb-operator-controller-manager-5c78669894-4cs4c\" (UID: \"79ff7401-87f7-494c-8b09-aa9fc59a934b\") " pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.894936 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.948820 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf"] Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.950229 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.955639 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.956117 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.956301 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-47n8s" Nov 24 11:42:08 crc kubenswrapper[4789]: I1124 11:42:08.964937 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf"] Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.206434 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/689fdd74-d64c-431d-a036-babb90542dd8-webhook-cert\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.206562 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/689fdd74-d64c-431d-a036-babb90542dd8-apiservice-cert\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.206592 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpbvq\" (UniqueName: \"kubernetes.io/projected/689fdd74-d64c-431d-a036-babb90542dd8-kube-api-access-wpbvq\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.307309 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/689fdd74-d64c-431d-a036-babb90542dd8-webhook-cert\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.307376 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/689fdd74-d64c-431d-a036-babb90542dd8-apiservice-cert\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.307403 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpbvq\" (UniqueName: \"kubernetes.io/projected/689fdd74-d64c-431d-a036-babb90542dd8-kube-api-access-wpbvq\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 
Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.312244 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/689fdd74-d64c-431d-a036-babb90542dd8-apiservice-cert\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf"
Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.325368 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/689fdd74-d64c-431d-a036-babb90542dd8-webhook-cert\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf"
Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.333638 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpbvq\" (UniqueName: \"kubernetes.io/projected/689fdd74-d64c-431d-a036-babb90542dd8-kube-api-access-wpbvq\") pod \"metallb-operator-webhook-server-79c46fb6f4-rtbcf\" (UID: \"689fdd74-d64c-431d-a036-babb90542dd8\") " pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf"
Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.371058 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"]
Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.568476 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf"
Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.771037 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf"]
Nov 24 11:42:09 crc kubenswrapper[4789]: W1124 11:42:09.778985 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod689fdd74_d64c_431d_a036_babb90542dd8.slice/crio-1616cdd7af4e95321342672a8d343ab3a5522060f05b42d3dbff2a0a3002d40b WatchSource:0}: Error finding container 1616cdd7af4e95321342672a8d343ab3a5522060f05b42d3dbff2a0a3002d40b: Status 404 returned error can't find the container with id 1616cdd7af4e95321342672a8d343ab3a5522060f05b42d3dbff2a0a3002d40b
Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.980247 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" event={"ID":"689fdd74-d64c-431d-a036-babb90542dd8","Type":"ContainerStarted","Data":"1616cdd7af4e95321342672a8d343ab3a5522060f05b42d3dbff2a0a3002d40b"}
Nov 24 11:42:09 crc kubenswrapper[4789]: I1124 11:42:09.981572 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" event={"ID":"79ff7401-87f7-494c-8b09-aa9fc59a934b","Type":"ContainerStarted","Data":"dea3c529ed32652b11d0e1fbd440abcc727f3bb3369b078cfb564646a8a44ef2"}
Nov 24 11:42:13 crc kubenswrapper[4789]: I1124 11:42:13.000532 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" event={"ID":"79ff7401-87f7-494c-8b09-aa9fc59a934b","Type":"ContainerStarted","Data":"9c64f0e93dca67bf6a389f775c745340ccc251f19416b79a2acf5ddc44866a6a"}
pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" Nov 24 11:42:13 crc kubenswrapper[4789]: I1124 11:42:13.027038 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c" podStartSLOduration=1.804364998 podStartE2EDuration="5.027023701s" podCreationTimestamp="2025-11-24 11:42:08 +0000 UTC" firstStartedPulling="2025-11-24 11:42:09.391680198 +0000 UTC m=+711.974151577" lastFinishedPulling="2025-11-24 11:42:12.614338901 +0000 UTC m=+715.196810280" observedRunningTime="2025-11-24 11:42:13.022921389 +0000 UTC m=+715.605392768" watchObservedRunningTime="2025-11-24 11:42:13.027023701 +0000 UTC m=+715.609495080" Nov 24 11:42:16 crc kubenswrapper[4789]: I1124 11:42:16.018617 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" event={"ID":"689fdd74-d64c-431d-a036-babb90542dd8","Type":"ContainerStarted","Data":"5c3d9fe96ffb6e23b091fcd5b302d36c4c5225a3e34f843848da8bf943b4afde"} Nov 24 11:42:16 crc kubenswrapper[4789]: I1124 11:42:16.018912 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:16 crc kubenswrapper[4789]: I1124 11:42:16.040061 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" podStartSLOduration=2.897309447 podStartE2EDuration="8.040044119s" podCreationTimestamp="2025-11-24 11:42:08 +0000 UTC" firstStartedPulling="2025-11-24 11:42:09.781445477 +0000 UTC m=+712.363916856" lastFinishedPulling="2025-11-24 11:42:14.924180149 +0000 UTC m=+717.506651528" observedRunningTime="2025-11-24 11:42:16.038923232 +0000 UTC m=+718.621394601" watchObservedRunningTime="2025-11-24 11:42:16.040044119 +0000 UTC m=+718.622515498" Nov 24 11:42:20 crc kubenswrapper[4789]: I1124 11:42:20.162713 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:42:20 crc kubenswrapper[4789]: I1124 11:42:20.163247 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:42:29 crc kubenswrapper[4789]: I1124 11:42:29.578569 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79c46fb6f4-rtbcf" Nov 24 11:42:44 crc kubenswrapper[4789]: I1124 11:42:44.631757 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j4swj"] Nov 24 11:42:44 crc kubenswrapper[4789]: I1124 11:42:44.632614 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" podUID="4372e46e-19ca-487e-b2ee-1fea92a3197d" containerName="controller-manager" containerID="cri-o://1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759" gracePeriod=30 Nov 24 11:42:44 crc kubenswrapper[4789]: I1124 11:42:44.723119 4789 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v"] Nov 24 11:42:44 crc kubenswrapper[4789]: I1124 11:42:44.723323 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" podUID="584e1901-c470-4a3f-9461-7e97f4688399" containerName="route-controller-manager" containerID="cri-o://2d643dd176cbbbfb94a6977ed6171aa3f70d99a970c73ea87f8c4d28fb513006" gracePeriod=30 Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.134288 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.208097 4789 generic.go:334] "Generic (PLEG): container finished" podID="4372e46e-19ca-487e-b2ee-1fea92a3197d" containerID="1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759" exitCode=0 Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.208158 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" event={"ID":"4372e46e-19ca-487e-b2ee-1fea92a3197d","Type":"ContainerDied","Data":"1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759"} Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.208192 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" event={"ID":"4372e46e-19ca-487e-b2ee-1fea92a3197d","Type":"ContainerDied","Data":"fe74cd8802aba7446cac66550d0662301bf653c185f7d8caea30ef60479bccfd"} Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.208208 4789 scope.go:117] "RemoveContainer" containerID="1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.208243 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-j4swj" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.211920 4789 generic.go:334] "Generic (PLEG): container finished" podID="584e1901-c470-4a3f-9461-7e97f4688399" containerID="2d643dd176cbbbfb94a6977ed6171aa3f70d99a970c73ea87f8c4d28fb513006" exitCode=0 Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.211950 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" event={"ID":"584e1901-c470-4a3f-9461-7e97f4688399","Type":"ContainerDied","Data":"2d643dd176cbbbfb94a6977ed6171aa3f70d99a970c73ea87f8c4d28fb513006"} Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.234343 4789 scope.go:117] "RemoveContainer" containerID="1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759" Nov 24 11:42:45 crc kubenswrapper[4789]: E1124 11:42:45.234817 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759\": container with ID starting with 1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759 not found: ID does not exist" containerID="1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.234850 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759"} err="failed to get container status \"1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759\": rpc error: code = NotFound desc = could not find container \"1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759\": container with ID starting with 1335bcac9aff2ce299a3bd26ac7e1f352a11123436a5485613284ba1a0d09759 not found: ID does not exist" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.255404 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.275089 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-client-ca\") pod \"4372e46e-19ca-487e-b2ee-1fea92a3197d\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.275139 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-proxy-ca-bundles\") pod \"4372e46e-19ca-487e-b2ee-1fea92a3197d\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.275188 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-config\") pod \"4372e46e-19ca-487e-b2ee-1fea92a3197d\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.275218 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4372e46e-19ca-487e-b2ee-1fea92a3197d-serving-cert\") pod \"4372e46e-19ca-487e-b2ee-1fea92a3197d\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.275248 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgxv4\" (UniqueName: \"kubernetes.io/projected/4372e46e-19ca-487e-b2ee-1fea92a3197d-kube-api-access-zgxv4\") pod \"4372e46e-19ca-487e-b2ee-1fea92a3197d\" (UID: \"4372e46e-19ca-487e-b2ee-1fea92a3197d\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.276067 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4372e46e-19ca-487e-b2ee-1fea92a3197d" (UID: "4372e46e-19ca-487e-b2ee-1fea92a3197d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.276079 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-config" (OuterVolumeSpecName: "config") pod "4372e46e-19ca-487e-b2ee-1fea92a3197d" (UID: "4372e46e-19ca-487e-b2ee-1fea92a3197d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.276327 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-client-ca" (OuterVolumeSpecName: "client-ca") pod "4372e46e-19ca-487e-b2ee-1fea92a3197d" (UID: "4372e46e-19ca-487e-b2ee-1fea92a3197d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.280942 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4372e46e-19ca-487e-b2ee-1fea92a3197d-kube-api-access-zgxv4" (OuterVolumeSpecName: "kube-api-access-zgxv4") pod "4372e46e-19ca-487e-b2ee-1fea92a3197d" (UID: "4372e46e-19ca-487e-b2ee-1fea92a3197d"). 
InnerVolumeSpecName "kube-api-access-zgxv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.285661 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4372e46e-19ca-487e-b2ee-1fea92a3197d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4372e46e-19ca-487e-b2ee-1fea92a3197d" (UID: "4372e46e-19ca-487e-b2ee-1fea92a3197d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.378071 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57bvz\" (UniqueName: \"kubernetes.io/projected/584e1901-c470-4a3f-9461-7e97f4688399-kube-api-access-57bvz\") pod \"584e1901-c470-4a3f-9461-7e97f4688399\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.378117 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/584e1901-c470-4a3f-9461-7e97f4688399-serving-cert\") pod \"584e1901-c470-4a3f-9461-7e97f4688399\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.378181 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-config\") pod \"584e1901-c470-4a3f-9461-7e97f4688399\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.378240 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-client-ca\") pod \"584e1901-c470-4a3f-9461-7e97f4688399\" (UID: \"584e1901-c470-4a3f-9461-7e97f4688399\") " Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.378781 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-client-ca" (OuterVolumeSpecName: "client-ca") pod "584e1901-c470-4a3f-9461-7e97f4688399" (UID: "584e1901-c470-4a3f-9461-7e97f4688399"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.378915 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-config" (OuterVolumeSpecName: "config") pod "584e1901-c470-4a3f-9461-7e97f4688399" (UID: "584e1901-c470-4a3f-9461-7e97f4688399"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.378937 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.379001 4789 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.379018 4789 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.379038 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4372e46e-19ca-487e-b2ee-1fea92a3197d-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.379054 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4372e46e-19ca-487e-b2ee-1fea92a3197d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.379067 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgxv4\" (UniqueName: \"kubernetes.io/projected/4372e46e-19ca-487e-b2ee-1fea92a3197d-kube-api-access-zgxv4\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.381270 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584e1901-c470-4a3f-9461-7e97f4688399-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "584e1901-c470-4a3f-9461-7e97f4688399" (UID: "584e1901-c470-4a3f-9461-7e97f4688399"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.386911 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1901-c470-4a3f-9461-7e97f4688399-kube-api-access-57bvz" (OuterVolumeSpecName: "kube-api-access-57bvz") pod "584e1901-c470-4a3f-9461-7e97f4688399" (UID: "584e1901-c470-4a3f-9461-7e97f4688399"). InnerVolumeSpecName "kube-api-access-57bvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.480665 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57bvz\" (UniqueName: \"kubernetes.io/projected/584e1901-c470-4a3f-9461-7e97f4688399-kube-api-access-57bvz\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.480718 4789 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/584e1901-c470-4a3f-9461-7e97f4688399-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.480739 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584e1901-c470-4a3f-9461-7e97f4688399-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.545074 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j4swj"] Nov 24 11:42:45 crc kubenswrapper[4789]: I1124 11:42:45.558848 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j4swj"] Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.014448 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"] Nov 24 11:42:46 crc kubenswrapper[4789]: E1124 11:42:46.015196 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584e1901-c470-4a3f-9461-7e97f4688399" containerName="route-controller-manager" Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.015216 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="584e1901-c470-4a3f-9461-7e97f4688399" containerName="route-controller-manager" Nov 24 11:42:46 crc kubenswrapper[4789]: E1124 11:42:46.015241 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4372e46e-19ca-487e-b2ee-1fea92a3197d" containerName="controller-manager" Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.015253 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4372e46e-19ca-487e-b2ee-1fea92a3197d" containerName="controller-manager" Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.015419 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4372e46e-19ca-487e-b2ee-1fea92a3197d" containerName="controller-manager" Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.015438 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="584e1901-c470-4a3f-9461-7e97f4688399" containerName="route-controller-manager" Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.016105 4789 util.go:30] "No sandbox for pod can be found. 
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.024602 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"]
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.177793 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4372e46e-19ca-487e-b2ee-1fea92a3197d" path="/var/lib/kubelet/pods/4372e46e-19ca-487e-b2ee-1fea92a3197d/volumes"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.189648 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-serving-cert\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.189755 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4zg8\" (UniqueName: \"kubernetes.io/projected/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-kube-api-access-l4zg8\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.189800 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-config\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.189831 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-client-ca\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.225241 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v" event={"ID":"584e1901-c470-4a3f-9461-7e97f4688399","Type":"ContainerDied","Data":"b2841ce3954d8c2a635efc049ca34332f05b37b784e4511e79b020971d4a05b9"}
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.225351 4789 scope.go:117] "RemoveContainer" containerID="2d643dd176cbbbfb94a6977ed6171aa3f70d99a970c73ea87f8c4d28fb513006"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.225622 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.254375 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v"]
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.261023 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5lt8v"]
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.291066 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-serving-cert\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.291250 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4zg8\" (UniqueName: \"kubernetes.io/projected/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-kube-api-access-l4zg8\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.291347 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-config\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.291446 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-client-ca\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.292373 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-client-ca\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.292925 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-config\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.296396 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-serving-cert\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.317646 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4zg8\" (UniqueName: \"kubernetes.io/projected/cbf0e0a4-eb0a-462c-830e-41bb66088c1c-kube-api-access-l4zg8\") pod \"route-controller-manager-56794997cb-fjn9v\" (UID: \"cbf0e0a4-eb0a-462c-830e-41bb66088c1c\") " pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.331655 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.555443 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"]
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.784312 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-585555fb59-nmcw9"]
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.785055 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.787033 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.787049 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.787243 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.788102 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.788234 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.792011 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.807177 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.830335 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585555fb59-nmcw9"]
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.896115 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-client-ca\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.896183 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvm54\" (UniqueName: \"kubernetes.io/projected/74113474-ce45-4c5b-acd8-1b360036fe1a-kube-api-access-rvm54\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.896221 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-proxy-ca-bundles\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.896250 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74113474-ce45-4c5b-acd8-1b360036fe1a-serving-cert\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.896275 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-config\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.997547 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-config\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.999033 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-config\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.999103 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-client-ca\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.999182 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvm54\" (UniqueName: \"kubernetes.io/projected/74113474-ce45-4c5b-acd8-1b360036fe1a-kube-api-access-rvm54\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.999244 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-proxy-ca-bundles\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.999575 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74113474-ce45-4c5b-acd8-1b360036fe1a-serving-cert\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:46 crc kubenswrapper[4789]: I1124 11:42:46.999951 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-client-ca\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.000714 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74113474-ce45-4c5b-acd8-1b360036fe1a-proxy-ca-bundles\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.003920 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74113474-ce45-4c5b-acd8-1b360036fe1a-serving-cert\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.031244 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvm54\" (UniqueName: \"kubernetes.io/projected/74113474-ce45-4c5b-acd8-1b360036fe1a-kube-api-access-rvm54\") pod \"controller-manager-585555fb59-nmcw9\" (UID: \"74113474-ce45-4c5b-acd8-1b360036fe1a\") " pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.098238 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.240129 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v" event={"ID":"cbf0e0a4-eb0a-462c-830e-41bb66088c1c","Type":"ContainerStarted","Data":"7aa767dba2b758a4a7b6e4126423b988cb03b50269a169dc63e2b5a1baf0566a"}
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.240177 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v" event={"ID":"cbf0e0a4-eb0a-462c-830e-41bb66088c1c","Type":"ContainerStarted","Data":"e4e3028e13d4cde159155312ee5de10071a82d6a02050c96b1fa0456e6f4680c"}
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.240501 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.247605 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v"
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.267413 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56794997cb-fjn9v" podStartSLOduration=2.267395686 podStartE2EDuration="2.267395686s" podCreationTimestamp="2025-11-24 11:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:42:47.264531255 +0000 UTC m=+749.847002634" watchObservedRunningTime="2025-11-24 11:42:47.267395686 +0000 UTC m=+749.849867065"
Nov 24 11:42:47 crc kubenswrapper[4789]: I1124 11:42:47.374648 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585555fb59-nmcw9"]
Nov 24 11:42:47 crc kubenswrapper[4789]: W1124 11:42:47.378781 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74113474_ce45_4c5b_acd8_1b360036fe1a.slice/crio-18d4941e513c7824b78e9d1c9d4d38414daab78e2570aa73251650ec61e9197b WatchSource:0}: Error finding container 18d4941e513c7824b78e9d1c9d4d38414daab78e2570aa73251650ec61e9197b: Status 404 returned error can't find the container with id 18d4941e513c7824b78e9d1c9d4d38414daab78e2570aa73251650ec61e9197b
Nov 24 11:42:48 crc kubenswrapper[4789]: I1124 11:42:48.183171 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1901-c470-4a3f-9461-7e97f4688399" path="/var/lib/kubelet/pods/584e1901-c470-4a3f-9461-7e97f4688399/volumes"
Nov 24 11:42:48 crc kubenswrapper[4789]: I1124 11:42:48.251970 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9" event={"ID":"74113474-ce45-4c5b-acd8-1b360036fe1a","Type":"ContainerStarted","Data":"758fb1247b72ed45fafd18dd2574cf0335315c8d79a10a177907c2c1c02956ec"}
Nov 24 11:42:48 crc kubenswrapper[4789]: I1124 11:42:48.252011 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9" event={"ID":"74113474-ce45-4c5b-acd8-1b360036fe1a","Type":"ContainerStarted","Data":"18d4941e513c7824b78e9d1c9d4d38414daab78e2570aa73251650ec61e9197b"}
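
The pod_startup_latency_tracker record at 11:42:47.267413 above reports podStartE2EDuration as the gap between podCreationTimestamp and observedRunningTime. A small Go sketch reproducing that arithmetic for route-controller-manager-56794997cb-fjn9v, with the two timestamps taken from the record (the helper itself is illustrative, not the kubelet's):

// Derives podStartE2EDuration = observedRunningTime - podCreationTimestamp,
// using the timestamps from the log record above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp="2025-11-24 11:42:45 +0000 UTC"
	created, err := time.Parse(time.RFC3339Nano, "2025-11-24T11:42:45Z")
	if err != nil {
		panic(err)
	}
	// observedRunningTime (fractional seconds kept from the record)
	running, err := time.Parse(time.RFC3339Nano, "2025-11-24T11:42:47.267395686Z")
	if err != nil {
		panic(err)
	}
	// Matches podStartE2EDuration="2.267395686s" in the record.
	fmt.Println(running.Sub(created))
}
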
Nov 24 11:42:48 crc kubenswrapper[4789]: I1124 11:42:48.252290 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:48 crc kubenswrapper[4789]: I1124 11:42:48.257352 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9"
Nov 24 11:42:48 crc kubenswrapper[4789]: I1124 11:42:48.275238 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-585555fb59-nmcw9" podStartSLOduration=4.275215463 podStartE2EDuration="4.275215463s" podCreationTimestamp="2025-11-24 11:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:42:48.272964879 +0000 UTC m=+750.855436278" watchObservedRunningTime="2025-11-24 11:42:48.275215463 +0000 UTC m=+750.857686862"
Nov 24 11:42:48 crc kubenswrapper[4789]: I1124 11:42:48.897634 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5c78669894-4cs4c"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.615224 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hvbfg"]
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.617330 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.619016 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.619247 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.623869 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8wr2w"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.630357 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"]
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.631215 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.632700 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.671092 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"]
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-reloader\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737331 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2d4\" (UniqueName: \"kubernetes.io/projected/937c8174-492c-4125-9fa3-0f62b450e1e3-kube-api-access-wb2d4\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737354 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics-certs\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737369 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-startup\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737387 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea1421cc-29d8-43a2-898f-e12e9978b1fa-cert\") pod \"frr-k8s-webhook-server-6998585d5-bzw25\" (UID: \"ea1421cc-29d8-43a2-898f-e12e9978b1fa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737404 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-sockets\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737425 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737479 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9m4\" (UniqueName: \"kubernetes.io/projected/ea1421cc-29d8-43a2-898f-e12e9978b1fa-kube-api-access-pm9m4\") pod \"frr-k8s-webhook-server-6998585d5-bzw25\" (UID: \"ea1421cc-29d8-43a2-898f-e12e9978b1fa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.737506 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-conf\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.745096 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fbrt2"]
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.746182 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.750179 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.750494 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.750493 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.750656 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zx69f"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.781916 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-trm8h"]
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.782837 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.786714 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.795617 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-trm8h"]
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.842568 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-reloader\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.842616 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metallb-excludel2\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.842662 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2d4\" (UniqueName: \"kubernetes.io/projected/937c8174-492c-4125-9fa3-0f62b450e1e3-kube-api-access-wb2d4\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.842691 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics-certs\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.842710 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-startup\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.842805 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.842887 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea1421cc-29d8-43a2-898f-e12e9978b1fa-cert\") pod \"frr-k8s-webhook-server-6998585d5-bzw25\" (UID: \"ea1421cc-29d8-43a2-898f-e12e9978b1fa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:49 crc kubenswrapper[4789]: E1124 11:42:49.842900 4789 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.842932 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-sockets\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: E1124 11:42:49.842982 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics-certs podName:937c8174-492c-4125-9fa3-0f62b450e1e3 nodeName:}" failed. No retries permitted until 2025-11-24 11:42:50.342963686 +0000 UTC m=+752.925435065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics-certs") pod "frr-k8s-hvbfg" (UID: "937c8174-492c-4125-9fa3-0f62b450e1e3") : secret "frr-k8s-certs-secret" not found
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843009 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metrics-certs\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843082 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843128 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5mgg\" (UniqueName: \"kubernetes.io/projected/c6ea6339-def1-4bf8-ba76-2dce73b451c7-kube-api-access-n5mgg\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843194 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm9m4\" (UniqueName: \"kubernetes.io/projected/ea1421cc-29d8-43a2-898f-e12e9978b1fa-kube-api-access-pm9m4\") pod \"frr-k8s-webhook-server-6998585d5-bzw25\" (UID: \"ea1421cc-29d8-43a2-898f-e12e9978b1fa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843232 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-conf\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843416 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-sockets\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843588 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843714 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-reloader\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843763 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-conf\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.843874 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/937c8174-492c-4125-9fa3-0f62b450e1e3-frr-startup\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.851184 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea1421cc-29d8-43a2-898f-e12e9978b1fa-cert\") pod \"frr-k8s-webhook-server-6998585d5-bzw25\" (UID: \"ea1421cc-29d8-43a2-898f-e12e9978b1fa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.859642 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2d4\" (UniqueName: \"kubernetes.io/projected/937c8174-492c-4125-9fa3-0f62b450e1e3-kube-api-access-wb2d4\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.861073 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm9m4\" (UniqueName: \"kubernetes.io/projected/ea1421cc-29d8-43a2-898f-e12e9978b1fa-kube-api-access-pm9m4\") pod \"frr-k8s-webhook-server-6998585d5-bzw25\" (UID: \"ea1421cc-29d8-43a2-898f-e12e9978b1fa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.943570 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.943828 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metallb-excludel2\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.943879 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-cert\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.943909 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.943931 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metrics-certs\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.943953 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5mgg\" (UniqueName: \"kubernetes.io/projected/c6ea6339-def1-4bf8-ba76-2dce73b451c7-kube-api-access-n5mgg\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.943979 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-metrics-certs\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.944003 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmm24\" (UniqueName: \"kubernetes.io/projected/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-kube-api-access-rmm24\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:49 crc kubenswrapper[4789]: E1124 11:42:49.944090 4789 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 24 11:42:49 crc kubenswrapper[4789]: E1124 11:42:49.944124 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist podName:c6ea6339-def1-4bf8-ba76-2dce73b451c7 nodeName:}" failed. No retries permitted until 2025-11-24 11:42:50.444111021 +0000 UTC m=+753.026582400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist") pod "speaker-fbrt2" (UID: "c6ea6339-def1-4bf8-ba76-2dce73b451c7") : secret "metallb-memberlist" not found
Nov 24 11:42:49 crc kubenswrapper[4789]: E1124 11:42:49.944171 4789 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Nov 24 11:42:49 crc kubenswrapper[4789]: E1124 11:42:49.944193 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metrics-certs podName:c6ea6339-def1-4bf8-ba76-2dce73b451c7 nodeName:}" failed. No retries permitted until 2025-11-24 11:42:50.444185843 +0000 UTC m=+753.026657222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metrics-certs") pod "speaker-fbrt2" (UID: "c6ea6339-def1-4bf8-ba76-2dce73b451c7") : secret "speaker-certs-secret" not found
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.944447 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metallb-excludel2\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:49 crc kubenswrapper[4789]: I1124 11:42:49.985783 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5mgg\" (UniqueName: \"kubernetes.io/projected/c6ea6339-def1-4bf8-ba76-2dce73b451c7-kube-api-access-n5mgg\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.044649 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmm24\" (UniqueName: \"kubernetes.io/projected/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-kube-api-access-rmm24\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.044723 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-cert\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.044830 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-metrics-certs\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.046577 4789 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.049061 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-metrics-certs\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.059201 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-cert\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.071130 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmm24\" (UniqueName: \"kubernetes.io/projected/de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7-kube-api-access-rmm24\") pod \"controller-6c7b4b5f48-trm8h\" (UID: \"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7\") " pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.098516 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.163562 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.163609 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.163647 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.165364 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e60897d5da5e8d43be26df5c1cea722069e382de7019ee5de88fc244959bfbd"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.165481 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://8e60897d5da5e8d43be26df5c1cea722069e382de7019ee5de88fc244959bfbd" gracePeriod=600
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.350558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics-certs\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.359047 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/937c8174-492c-4125-9fa3-0f62b450e1e3-metrics-certs\") pod \"frr-k8s-hvbfg\" (UID: \"937c8174-492c-4125-9fa3-0f62b450e1e3\") " pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.453191 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.453300 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metrics-certs\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:50 crc kubenswrapper[4789]: E1124 11:42:50.454782 4789 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
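
At 11:42:50.163 the liveness probe against http://127.0.0.1:8798/health failed with connection refused, so the kubelet marked machine-config-daemon unhealthy and killed it with gracePeriod=600 for a restart (the ContainerDied/ContainerStarted pair at 11:42:51 below). A minimal Go sketch of an HTTP liveness check of this shape (the URL comes from the log record; the checker itself is illustrative, not the kubelet's prober):

// Minimal HTTP liveness check of the shape the kubelet ran against
// http://127.0.0.1:8798/health; a failure (e.g. connection refused)
// marks the container unhealthy so it gets killed and restarted.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func healthy(url string) bool {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	if !healthy("http://127.0.0.1:8798/health") {
		fmt.Println("liveness failed: container will be restarted")
	}
}
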
Nov 24 11:42:50 crc kubenswrapper[4789]: E1124 11:42:50.454923 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist podName:c6ea6339-def1-4bf8-ba76-2dce73b451c7 nodeName:}" failed. No retries permitted until 2025-11-24 11:42:51.454894019 +0000 UTC m=+754.037365398 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist") pod "speaker-fbrt2" (UID: "c6ea6339-def1-4bf8-ba76-2dce73b451c7") : secret "metallb-memberlist" not found
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.459404 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"]
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.471175 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-metrics-certs\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.533285 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:42:50 crc kubenswrapper[4789]: I1124 11:42:50.614367 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-trm8h"]
Nov 24 11:42:50 crc kubenswrapper[4789]: W1124 11:42:50.621544 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde5e9675_d7e9_4a4f_ba3d_000b5cabd4f7.slice/crio-23f6b921ee24a9764c65804aea283fd5b8404963ab52a89fe4bdaa78427d7cb0 WatchSource:0}: Error finding container 23f6b921ee24a9764c65804aea283fd5b8404963ab52a89fe4bdaa78427d7cb0: Status 404 returned error can't find the container with id 23f6b921ee24a9764c65804aea283fd5b8404963ab52a89fe4bdaa78427d7cb0
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.276380 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="8e60897d5da5e8d43be26df5c1cea722069e382de7019ee5de88fc244959bfbd" exitCode=0
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.276759 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"8e60897d5da5e8d43be26df5c1cea722069e382de7019ee5de88fc244959bfbd"}
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.276791 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"4aecda2250b38282b436cf65055990a602ab1ffc6d48744037d9fd3637b96bdb"}
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.276811 4789 scope.go:117] "RemoveContainer" containerID="64e45ebae9200df335dbfb46077262c25e90b02c6e55caf8466a7e14f278b850"
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.279774 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerStarted","Data":"71800e08f1cf8a04c77f6ff07ec429f66371d947089d9faa6e8821621fd0f607"}
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.282918 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-trm8h" event={"ID":"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7","Type":"ContainerStarted","Data":"af49701473b49ddb97e470f0aa88c745c0efbf660032f83d55b676957869b9f2"}
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.282965 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-trm8h" event={"ID":"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7","Type":"ContainerStarted","Data":"4d59df4dc6d090fd6592b558e0e2fd3ae5fe24479d355dc77fed6066d4a75a44"}
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.282984 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-trm8h" event={"ID":"de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7","Type":"ContainerStarted","Data":"23f6b921ee24a9764c65804aea283fd5b8404963ab52a89fe4bdaa78427d7cb0"}
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.283729 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.285293 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25" event={"ID":"ea1421cc-29d8-43a2-898f-e12e9978b1fa","Type":"ContainerStarted","Data":"307f669c9154693cbcf2e3b11efb85f089a08a6bbf4005ec39d5cc3f2cc11670"}
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.323191 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-trm8h" podStartSLOduration=2.323171598 podStartE2EDuration="2.323171598s" podCreationTimestamp="2025-11-24 11:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:42:51.318675918 +0000 UTC m=+753.901147307" watchObservedRunningTime="2025-11-24 11:42:51.323171598 +0000 UTC m=+753.905642997"
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.467980 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.477348 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c6ea6339-def1-4bf8-ba76-2dce73b451c7-memberlist\") pod \"speaker-fbrt2\" (UID: \"c6ea6339-def1-4bf8-ba76-2dce73b451c7\") " pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:51 crc kubenswrapper[4789]: I1124 11:42:51.560114 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:52 crc kubenswrapper[4789]: I1124 11:42:52.302857 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fbrt2" event={"ID":"c6ea6339-def1-4bf8-ba76-2dce73b451c7","Type":"ContainerStarted","Data":"33ef0ff6fda71917a9780992d9e4996c3e792ff714315500835d541b72f22b00"}
Nov 24 11:42:52 crc kubenswrapper[4789]: I1124 11:42:52.303112 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fbrt2" event={"ID":"c6ea6339-def1-4bf8-ba76-2dce73b451c7","Type":"ContainerStarted","Data":"255ed1e615d56446a2f1d03466773fd3473c94ab60e0a0c92e417613bec8e496"}
Nov 24 11:42:52 crc kubenswrapper[4789]: I1124 11:42:52.303122 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fbrt2" event={"ID":"c6ea6339-def1-4bf8-ba76-2dce73b451c7","Type":"ContainerStarted","Data":"27c1bef31ac318d66a8d12b2179df81465c98b2bad2370db2d11b11aa5fd3d6f"}
Nov 24 11:42:52 crc kubenswrapper[4789]: I1124 11:42:52.303300 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fbrt2"
Nov 24 11:42:52 crc kubenswrapper[4789]: I1124 11:42:52.330799 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fbrt2" podStartSLOduration=3.330781771 podStartE2EDuration="3.330781771s" podCreationTimestamp="2025-11-24 11:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:42:52.322872736 +0000 UTC m=+754.905344115" watchObservedRunningTime="2025-11-24 11:42:52.330781771 +0000 UTC m=+754.913253150"
Nov 24 11:42:54 crc kubenswrapper[4789]: I1124 11:42:54.748063 4789 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 11:42:59 crc kubenswrapper[4789]: I1124 11:42:59.366523 4789 generic.go:334] "Generic (PLEG): container finished" podID="937c8174-492c-4125-9fa3-0f62b450e1e3" containerID="10ae4fd15cc7e62e5f1998113a20e4e9019d4f9846fb9f00c64db164d35054b8" exitCode=0
Nov 24 11:42:59 crc kubenswrapper[4789]: I1124 11:42:59.366609 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerDied","Data":"10ae4fd15cc7e62e5f1998113a20e4e9019d4f9846fb9f00c64db164d35054b8"}
Nov 24 11:42:59 crc kubenswrapper[4789]: I1124 11:42:59.369272 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25" event={"ID":"ea1421cc-29d8-43a2-898f-e12e9978b1fa","Type":"ContainerStarted","Data":"1b6e6dede3dc1ad065eb2a3b1d4de98d15141ff2630b1f5b4b46253d79b18d7a"}
Nov 24 11:42:59 crc kubenswrapper[4789]: I1124 11:42:59.369874 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25"
Nov 24 11:42:59 crc kubenswrapper[4789]: I1124 11:42:59.419768 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25" podStartSLOduration=2.6234241750000002 podStartE2EDuration="10.419748673s" podCreationTimestamp="2025-11-24 11:42:49 +0000 UTC" firstStartedPulling="2025-11-24 11:42:50.479431452 +0000 UTC m=+753.061902831" lastFinishedPulling="2025-11-24 11:42:58.27575596 +0000 UTC m=+760.858227329" observedRunningTime="2025-11-24 11:42:59.41840147 +0000 UTC m=+762.000872849" watchObservedRunningTime="2025-11-24 11:42:59.419748673 +0000 UTC m=+762.002220082"
Nov 24 11:43:00 crc kubenswrapper[4789]: I1124 11:43:00.101929 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-trm8h"
Nov 24 11:43:00 crc kubenswrapper[4789]: I1124 11:43:00.376121 4789 generic.go:334] "Generic (PLEG): container finished" podID="937c8174-492c-4125-9fa3-0f62b450e1e3" containerID="55595d5a6923605cb6d2b8bb71ee6dfc7ffe8753edf8cccb1ae586bd5a5ec645" exitCode=0
Nov 24 11:43:00 crc kubenswrapper[4789]: I1124 11:43:00.376165 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerDied","Data":"55595d5a6923605cb6d2b8bb71ee6dfc7ffe8753edf8cccb1ae586bd5a5ec645"}
Nov 24 11:43:01 crc kubenswrapper[4789]: I1124 11:43:01.386781 4789 generic.go:334] "Generic (PLEG): container finished" podID="937c8174-492c-4125-9fa3-0f62b450e1e3" containerID="ea7bda24df64c44b7c8c1e9c0aaa7cbac2ed06928c3aebaaccd633f8dcb1b399" exitCode=0
Nov 24 11:43:01 crc kubenswrapper[4789]: I1124 11:43:01.388416 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerDied","Data":"ea7bda24df64c44b7c8c1e9c0aaa7cbac2ed06928c3aebaaccd633f8dcb1b399"}
Nov 24 11:43:01 crc kubenswrapper[4789]: I1124 11:43:01.564979 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fbrt2"
Nov 24 11:43:02 crc kubenswrapper[4789]: I1124 11:43:02.402968 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerStarted","Data":"e9f342b3c76dc67efbd8c57652cf11400b9a8666642fa4ee2d90fcd8162964a6"}
Nov 24 11:43:02 crc kubenswrapper[4789]: I1124 11:43:02.403254 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerStarted","Data":"bae510c23be0b0a6daa71552724edd940ae9355a94ef530a1f1487b9d75e89a5"}
Nov 24 11:43:02 crc kubenswrapper[4789]: I1124 11:43:02.403274 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerStarted","Data":"5107ccb1c55832850f3de8064943480b5e01f8eeda0c41ad9cdac643153d322c"}
Nov 24 11:43:02 crc kubenswrapper[4789]: I1124 11:43:02.403287 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerStarted","Data":"897d634def46df661e5e0ed85e186b0bcc38cd4f49361fb1521e01695b76b562"}
Nov 24 11:43:02 crc kubenswrapper[4789]: I1124 11:43:02.403298 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerStarted","Data":"b57221cdf149e2066487a5e271096a9f6e257990d7ad167b7d1968ba1fc9e0a6"}
Nov 24 11:43:03 crc kubenswrapper[4789]: I1124 11:43:03.412864 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvbfg" event={"ID":"937c8174-492c-4125-9fa3-0f62b450e1e3","Type":"ContainerStarted","Data":"99f6ada8813ffb76ba47a60b552c8f8957d4e9b7a2d97e741ccf71e0a83ee208"}
Nov 24 11:43:03 crc kubenswrapper[4789]: I1124 11:43:03.413052 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hvbfg"
Nov 24 11:43:03 crc kubenswrapper[4789]: I1124 11:43:03.447531 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hvbfg" podStartSLOduration=6.87356066 podStartE2EDuration="14.447515426s" podCreationTimestamp="2025-11-24 11:42:49 +0000 UTC" firstStartedPulling="2025-11-24 11:42:50.736205859 +0000 UTC m=+753.318677238" lastFinishedPulling="2025-11-24 11:42:58.310160625 +0000 UTC m=+760.892632004" observedRunningTime="2025-11-24 11:43:03.445121027 +0000 UTC m=+766.027592466" watchObservedRunningTime="2025-11-24 11:43:03.447515426 +0000 UTC m=+766.029986805" Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.547129 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6lvtg"] Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.548262 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6lvtg" Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.557505 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrqhn\" (UniqueName: \"kubernetes.io/projected/708fc33a-1bbd-439d-bfa3-af1de7af188e-kube-api-access-vrqhn\") pod \"openstack-operator-index-6lvtg\" (UID: \"708fc33a-1bbd-439d-bfa3-af1de7af188e\") " pod="openstack-operators/openstack-operator-index-6lvtg" Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.558419 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.558426 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.580409 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6lvtg"] Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.659134 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrqhn\" (UniqueName: \"kubernetes.io/projected/708fc33a-1bbd-439d-bfa3-af1de7af188e-kube-api-access-vrqhn\") pod \"openstack-operator-index-6lvtg\" (UID: \"708fc33a-1bbd-439d-bfa3-af1de7af188e\") " pod="openstack-operators/openstack-operator-index-6lvtg" Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.679015 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrqhn\" (UniqueName: \"kubernetes.io/projected/708fc33a-1bbd-439d-bfa3-af1de7af188e-kube-api-access-vrqhn\") pod \"openstack-operator-index-6lvtg\" (UID: \"708fc33a-1bbd-439d-bfa3-af1de7af188e\") " pod="openstack-operators/openstack-operator-index-6lvtg" Nov 24 11:43:04 crc kubenswrapper[4789]: I1124 11:43:04.869663 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6lvtg" Nov 24 11:43:05 crc kubenswrapper[4789]: I1124 11:43:05.314674 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6lvtg"] Nov 24 11:43:05 crc kubenswrapper[4789]: I1124 11:43:05.432016 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6lvtg" event={"ID":"708fc33a-1bbd-439d-bfa3-af1de7af188e","Type":"ContainerStarted","Data":"c20373ad73654856547244a4ca95d79408dc3acfee0726346c91223c10f4aeee"} Nov 24 11:43:05 crc kubenswrapper[4789]: I1124 11:43:05.534506 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hvbfg" Nov 24 11:43:05 crc kubenswrapper[4789]: I1124 11:43:05.590839 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hvbfg" Nov 24 11:43:07 crc kubenswrapper[4789]: I1124 11:43:07.322320 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6lvtg"] Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.142974 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ff2g6"] Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.145286 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.156147 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gbpds" Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.180020 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ff2g6"] Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.307419 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpcw\" (UniqueName: \"kubernetes.io/projected/684283c3-7c6e-4252-a66c-19cb552eeb56-kube-api-access-pxpcw\") pod \"openstack-operator-index-ff2g6\" (UID: \"684283c3-7c6e-4252-a66c-19cb552eeb56\") " pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.408125 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpcw\" (UniqueName: \"kubernetes.io/projected/684283c3-7c6e-4252-a66c-19cb552eeb56-kube-api-access-pxpcw\") pod \"openstack-operator-index-ff2g6\" (UID: \"684283c3-7c6e-4252-a66c-19cb552eeb56\") " pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.430173 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpcw\" (UniqueName: \"kubernetes.io/projected/684283c3-7c6e-4252-a66c-19cb552eeb56-kube-api-access-pxpcw\") pod \"openstack-operator-index-ff2g6\" (UID: \"684283c3-7c6e-4252-a66c-19cb552eeb56\") " pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.448759 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6lvtg" event={"ID":"708fc33a-1bbd-439d-bfa3-af1de7af188e","Type":"ContainerStarted","Data":"4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3"} Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.448974 4789 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack-operators/openstack-operator-index-6lvtg" podUID="708fc33a-1bbd-439d-bfa3-af1de7af188e" containerName="registry-server" containerID="cri-o://4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3" gracePeriod=2 Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.473142 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6lvtg" podStartSLOduration=2.079759986 podStartE2EDuration="4.47311818s" podCreationTimestamp="2025-11-24 11:43:04 +0000 UTC" firstStartedPulling="2025-11-24 11:43:05.327695302 +0000 UTC m=+767.910166681" lastFinishedPulling="2025-11-24 11:43:07.721053496 +0000 UTC m=+770.303524875" observedRunningTime="2025-11-24 11:43:08.469264876 +0000 UTC m=+771.051736295" watchObservedRunningTime="2025-11-24 11:43:08.47311818 +0000 UTC m=+771.055589579" Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.476614 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:08 crc kubenswrapper[4789]: I1124 11:43:08.936496 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ff2g6"] Nov 24 11:43:08 crc kubenswrapper[4789]: W1124 11:43:08.946062 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684283c3_7c6e_4252_a66c_19cb552eeb56.slice/crio-a50b4c355ef57063af30a86835f5954f50b40602e1dbbbc9f1e7f3c89cff1dcc WatchSource:0}: Error finding container a50b4c355ef57063af30a86835f5954f50b40602e1dbbbc9f1e7f3c89cff1dcc: Status 404 returned error can't find the container with id a50b4c355ef57063af30a86835f5954f50b40602e1dbbbc9f1e7f3c89cff1dcc Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.023121 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6lvtg" Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.218006 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrqhn\" (UniqueName: \"kubernetes.io/projected/708fc33a-1bbd-439d-bfa3-af1de7af188e-kube-api-access-vrqhn\") pod \"708fc33a-1bbd-439d-bfa3-af1de7af188e\" (UID: \"708fc33a-1bbd-439d-bfa3-af1de7af188e\") " Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.228940 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708fc33a-1bbd-439d-bfa3-af1de7af188e-kube-api-access-vrqhn" (OuterVolumeSpecName: "kube-api-access-vrqhn") pod "708fc33a-1bbd-439d-bfa3-af1de7af188e" (UID: "708fc33a-1bbd-439d-bfa3-af1de7af188e"). InnerVolumeSpecName "kube-api-access-vrqhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.319584 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrqhn\" (UniqueName: \"kubernetes.io/projected/708fc33a-1bbd-439d-bfa3-af1de7af188e-kube-api-access-vrqhn\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.458440 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ff2g6" event={"ID":"684283c3-7c6e-4252-a66c-19cb552eeb56","Type":"ContainerStarted","Data":"3f6654ae49d3aeb58f930172a500c5bb80987c95c43c6a2d894283e81b458ca6"} Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.458534 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ff2g6" event={"ID":"684283c3-7c6e-4252-a66c-19cb552eeb56","Type":"ContainerStarted","Data":"a50b4c355ef57063af30a86835f5954f50b40602e1dbbbc9f1e7f3c89cff1dcc"} Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.463401 4789 generic.go:334] "Generic (PLEG): container finished" podID="708fc33a-1bbd-439d-bfa3-af1de7af188e" containerID="4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3" exitCode=0 Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.463506 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6lvtg" event={"ID":"708fc33a-1bbd-439d-bfa3-af1de7af188e","Type":"ContainerDied","Data":"4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3"} Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.463541 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6lvtg" event={"ID":"708fc33a-1bbd-439d-bfa3-af1de7af188e","Type":"ContainerDied","Data":"c20373ad73654856547244a4ca95d79408dc3acfee0726346c91223c10f4aeee"} Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.463583 4789 scope.go:117] "RemoveContainer" containerID="4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3" Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.463764 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6lvtg" Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.510546 4789 scope.go:117] "RemoveContainer" containerID="4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3" Nov 24 11:43:09 crc kubenswrapper[4789]: E1124 11:43:09.511693 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3\": container with ID starting with 4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3 not found: ID does not exist" containerID="4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3" Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.511753 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3"} err="failed to get container status \"4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3\": rpc error: code = NotFound desc = could not find container \"4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3\": container with ID starting with 4957693427c06dd35713b4f64b3a97732b8f5cc54d0f81e156f9690597e586e3 not found: ID does not exist" Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.516368 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ff2g6" podStartSLOduration=1.46926167 podStartE2EDuration="1.516340107s" podCreationTimestamp="2025-11-24 11:43:08 +0000 UTC" firstStartedPulling="2025-11-24 11:43:08.952600359 +0000 UTC m=+771.535071738" lastFinishedPulling="2025-11-24 11:43:08.999678786 +0000 UTC m=+771.582150175" observedRunningTime="2025-11-24 11:43:09.498604371 +0000 UTC m=+772.081075760" watchObservedRunningTime="2025-11-24 11:43:09.516340107 +0000 UTC m=+772.098811526" Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.525504 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6lvtg"] Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.529286 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6lvtg"] Nov 24 11:43:09 crc kubenswrapper[4789]: I1124 11:43:09.949255 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bzw25" Nov 24 11:43:10 crc kubenswrapper[4789]: I1124 11:43:10.180147 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708fc33a-1bbd-439d-bfa3-af1de7af188e" path="/var/lib/kubelet/pods/708fc33a-1bbd-439d-bfa3-af1de7af188e/volumes" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.532058 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-phhf5"] Nov 24 11:43:11 crc kubenswrapper[4789]: E1124 11:43:11.532988 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708fc33a-1bbd-439d-bfa3-af1de7af188e" containerName="registry-server" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.533092 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="708fc33a-1bbd-439d-bfa3-af1de7af188e" containerName="registry-server" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.533318 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="708fc33a-1bbd-439d-bfa3-af1de7af188e" containerName="registry-server" Nov 24 11:43:11 crc 
kubenswrapper[4789]: I1124 11:43:11.534411 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.555317 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phhf5"] Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.647136 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-catalog-content\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.647192 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-utilities\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.647236 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrnm5\" (UniqueName: \"kubernetes.io/projected/efde0f8e-b821-437d-9dac-2994b8321275-kube-api-access-xrnm5\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.748225 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-catalog-content\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.748287 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-utilities\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.748332 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrnm5\" (UniqueName: \"kubernetes.io/projected/efde0f8e-b821-437d-9dac-2994b8321275-kube-api-access-xrnm5\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.749109 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-catalog-content\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.749331 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-utilities\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 
24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.764782 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrnm5\" (UniqueName: \"kubernetes.io/projected/efde0f8e-b821-437d-9dac-2994b8321275-kube-api-access-xrnm5\") pod \"certified-operators-phhf5\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:11 crc kubenswrapper[4789]: I1124 11:43:11.851281 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:12 crc kubenswrapper[4789]: I1124 11:43:12.311168 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phhf5"] Nov 24 11:43:12 crc kubenswrapper[4789]: I1124 11:43:12.482563 4789 generic.go:334] "Generic (PLEG): container finished" podID="efde0f8e-b821-437d-9dac-2994b8321275" containerID="b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1" exitCode=0 Nov 24 11:43:12 crc kubenswrapper[4789]: I1124 11:43:12.482732 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhf5" event={"ID":"efde0f8e-b821-437d-9dac-2994b8321275","Type":"ContainerDied","Data":"b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1"} Nov 24 11:43:12 crc kubenswrapper[4789]: I1124 11:43:12.482911 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhf5" event={"ID":"efde0f8e-b821-437d-9dac-2994b8321275","Type":"ContainerStarted","Data":"997fe6b2e758c3c8b3bc8c65cae576886d19c504931d75ae1d8f7c418f47dffe"} Nov 24 11:43:14 crc kubenswrapper[4789]: I1124 11:43:14.497430 4789 generic.go:334] "Generic (PLEG): container finished" podID="efde0f8e-b821-437d-9dac-2994b8321275" containerID="a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c" exitCode=0 Nov 24 11:43:14 crc kubenswrapper[4789]: I1124 11:43:14.497503 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhf5" event={"ID":"efde0f8e-b821-437d-9dac-2994b8321275","Type":"ContainerDied","Data":"a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c"} Nov 24 11:43:15 crc kubenswrapper[4789]: I1124 11:43:15.504606 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhf5" event={"ID":"efde0f8e-b821-437d-9dac-2994b8321275","Type":"ContainerStarted","Data":"8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1"} Nov 24 11:43:15 crc kubenswrapper[4789]: I1124 11:43:15.527230 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-phhf5" podStartSLOduration=2.109201257 podStartE2EDuration="4.527213295s" podCreationTimestamp="2025-11-24 11:43:11 +0000 UTC" firstStartedPulling="2025-11-24 11:43:12.484107282 +0000 UTC m=+775.066578661" lastFinishedPulling="2025-11-24 11:43:14.90211932 +0000 UTC m=+777.484590699" observedRunningTime="2025-11-24 11:43:15.526098938 +0000 UTC m=+778.108570337" watchObservedRunningTime="2025-11-24 11:43:15.527213295 +0000 UTC m=+778.109684674" Nov 24 11:43:18 crc kubenswrapper[4789]: I1124 11:43:18.477112 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:18 crc kubenswrapper[4789]: I1124 11:43:18.477954 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:18 crc kubenswrapper[4789]: I1124 11:43:18.511275 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:18 crc kubenswrapper[4789]: I1124 11:43:18.579077 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ff2g6" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.383269 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95"] Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.384833 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.389659 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zjblj" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.404697 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95"] Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.536594 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hvbfg" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.573872 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-util\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.573943 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgww\" (UniqueName: \"kubernetes.io/projected/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-kube-api-access-5wgww\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.574051 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-bundle\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.674714 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-bundle\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.674776 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-util\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.674826 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgww\" (UniqueName: \"kubernetes.io/projected/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-kube-api-access-5wgww\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.675254 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-bundle\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.675963 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-util\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.711168 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wgww\" (UniqueName: \"kubernetes.io/projected/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-kube-api-access-5wgww\") pod \"dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:20 crc kubenswrapper[4789]: I1124 11:43:20.716991 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:21 crc kubenswrapper[4789]: I1124 11:43:21.193159 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95"] Nov 24 11:43:21 crc kubenswrapper[4789]: I1124 11:43:21.568449 4789 generic.go:334] "Generic (PLEG): container finished" podID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerID="b827613742c0022a6def1f73015712a55a7d29b220296aeacdbb02d0142a89cd" exitCode=0 Nov 24 11:43:21 crc kubenswrapper[4789]: I1124 11:43:21.568544 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" event={"ID":"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1","Type":"ContainerDied","Data":"b827613742c0022a6def1f73015712a55a7d29b220296aeacdbb02d0142a89cd"} Nov 24 11:43:21 crc kubenswrapper[4789]: I1124 11:43:21.568625 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" event={"ID":"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1","Type":"ContainerStarted","Data":"7150b0a4fa1b5b85191669a6d8f1b5258f358eb80fbe10e09306e78c12fbe137"} Nov 24 11:43:21 crc kubenswrapper[4789]: I1124 11:43:21.852395 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:21 crc kubenswrapper[4789]: I1124 11:43:21.852500 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:21 crc kubenswrapper[4789]: I1124 11:43:21.931381 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:22 crc kubenswrapper[4789]: I1124 11:43:22.575625 4789 generic.go:334] "Generic (PLEG): container finished" podID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerID="c4379ae1f6939c53d14b577777661a64051aa2d8b370cbd514d4d22222bec1f4" exitCode=0 Nov 24 11:43:22 crc kubenswrapper[4789]: I1124 11:43:22.576393 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" event={"ID":"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1","Type":"ContainerDied","Data":"c4379ae1f6939c53d14b577777661a64051aa2d8b370cbd514d4d22222bec1f4"} Nov 24 11:43:22 crc kubenswrapper[4789]: I1124 11:43:22.633985 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:23 crc kubenswrapper[4789]: I1124 11:43:23.588570 4789 generic.go:334] "Generic (PLEG): container finished" podID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerID="2131a50e9b6b5ec8c1989d9bb29354edcfa394a1c7c21cfbd4128b514d2dde19" exitCode=0 Nov 24 11:43:23 crc kubenswrapper[4789]: I1124 11:43:23.588656 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" event={"ID":"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1","Type":"ContainerDied","Data":"2131a50e9b6b5ec8c1989d9bb29354edcfa394a1c7c21cfbd4128b514d2dde19"} Nov 24 11:43:24 crc kubenswrapper[4789]: I1124 11:43:24.729669 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phhf5"] Nov 24 11:43:24 crc kubenswrapper[4789]: I1124 11:43:24.909298 4789 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:24 crc kubenswrapper[4789]: I1124 11:43:24.970272 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wgww\" (UniqueName: \"kubernetes.io/projected/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-kube-api-access-5wgww\") pod \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " Nov 24 11:43:24 crc kubenswrapper[4789]: I1124 11:43:24.970337 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-bundle\") pod \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " Nov 24 11:43:24 crc kubenswrapper[4789]: I1124 11:43:24.970432 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-util\") pod \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\" (UID: \"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1\") " Nov 24 11:43:24 crc kubenswrapper[4789]: I1124 11:43:24.971036 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-bundle" (OuterVolumeSpecName: "bundle") pod "ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" (UID: "ab00850e-e7eb-4a71-ae4a-54c3b3d085f1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:43:24 crc kubenswrapper[4789]: I1124 11:43:24.976406 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-kube-api-access-5wgww" (OuterVolumeSpecName: "kube-api-access-5wgww") pod "ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" (UID: "ab00850e-e7eb-4a71-ae4a-54c3b3d085f1"). InnerVolumeSpecName "kube-api-access-5wgww". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:43:24 crc kubenswrapper[4789]: I1124 11:43:24.988931 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-util" (OuterVolumeSpecName: "util") pod "ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" (UID: "ab00850e-e7eb-4a71-ae4a-54c3b3d085f1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:43:25 crc kubenswrapper[4789]: I1124 11:43:25.072267 4789 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:25 crc kubenswrapper[4789]: I1124 11:43:25.072327 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wgww\" (UniqueName: \"kubernetes.io/projected/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-kube-api-access-5wgww\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:25 crc kubenswrapper[4789]: I1124 11:43:25.072341 4789 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab00850e-e7eb-4a71-ae4a-54c3b3d085f1-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:25 crc kubenswrapper[4789]: I1124 11:43:25.602689 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" event={"ID":"ab00850e-e7eb-4a71-ae4a-54c3b3d085f1","Type":"ContainerDied","Data":"7150b0a4fa1b5b85191669a6d8f1b5258f358eb80fbe10e09306e78c12fbe137"} Nov 24 11:43:25 crc kubenswrapper[4789]: I1124 11:43:25.603044 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7150b0a4fa1b5b85191669a6d8f1b5258f358eb80fbe10e09306e78c12fbe137" Nov 24 11:43:25 crc kubenswrapper[4789]: I1124 11:43:25.602824 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-phhf5" podUID="efde0f8e-b821-437d-9dac-2994b8321275" containerName="registry-server" containerID="cri-o://8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1" gracePeriod=2 Nov 24 11:43:25 crc kubenswrapper[4789]: I1124 11:43:25.603404 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95" Nov 24 11:43:25 crc kubenswrapper[4789]: I1124 11:43:25.986436 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.083960 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-catalog-content\") pod \"efde0f8e-b821-437d-9dac-2994b8321275\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.084017 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-utilities\") pod \"efde0f8e-b821-437d-9dac-2994b8321275\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.084080 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrnm5\" (UniqueName: \"kubernetes.io/projected/efde0f8e-b821-437d-9dac-2994b8321275-kube-api-access-xrnm5\") pod \"efde0f8e-b821-437d-9dac-2994b8321275\" (UID: \"efde0f8e-b821-437d-9dac-2994b8321275\") " Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.085313 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-utilities" (OuterVolumeSpecName: "utilities") pod "efde0f8e-b821-437d-9dac-2994b8321275" (UID: "efde0f8e-b821-437d-9dac-2994b8321275"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.094612 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efde0f8e-b821-437d-9dac-2994b8321275-kube-api-access-xrnm5" (OuterVolumeSpecName: "kube-api-access-xrnm5") pod "efde0f8e-b821-437d-9dac-2994b8321275" (UID: "efde0f8e-b821-437d-9dac-2994b8321275"). InnerVolumeSpecName "kube-api-access-xrnm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.136890 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efde0f8e-b821-437d-9dac-2994b8321275" (UID: "efde0f8e-b821-437d-9dac-2994b8321275"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.186088 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.186123 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efde0f8e-b821-437d-9dac-2994b8321275-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.186135 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrnm5\" (UniqueName: \"kubernetes.io/projected/efde0f8e-b821-437d-9dac-2994b8321275-kube-api-access-xrnm5\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.615572 4789 generic.go:334] "Generic (PLEG): container finished" podID="efde0f8e-b821-437d-9dac-2994b8321275" containerID="8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1" exitCode=0 Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.615650 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhf5" event={"ID":"efde0f8e-b821-437d-9dac-2994b8321275","Type":"ContainerDied","Data":"8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1"} Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.616707 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phhf5" event={"ID":"efde0f8e-b821-437d-9dac-2994b8321275","Type":"ContainerDied","Data":"997fe6b2e758c3c8b3bc8c65cae576886d19c504931d75ae1d8f7c418f47dffe"} Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.616774 4789 scope.go:117] "RemoveContainer" containerID="8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.615687 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phhf5" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.651403 4789 scope.go:117] "RemoveContainer" containerID="a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.656075 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phhf5"] Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.664307 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-phhf5"] Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.681237 4789 scope.go:117] "RemoveContainer" containerID="b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.712091 4789 scope.go:117] "RemoveContainer" containerID="8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1" Nov 24 11:43:26 crc kubenswrapper[4789]: E1124 11:43:26.714044 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1\": container with ID starting with 8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1 not found: ID does not exist" containerID="8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.714099 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1"} err="failed to get container status \"8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1\": rpc error: code = NotFound desc = could not find container \"8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1\": container with ID starting with 8ec6d7a286cc518a6a806599464cc76508f99eb16e55ca17402ea0920bf19ea1 not found: ID does not exist" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.714158 4789 scope.go:117] "RemoveContainer" containerID="a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c" Nov 24 11:43:26 crc kubenswrapper[4789]: E1124 11:43:26.726921 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c\": container with ID starting with a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c not found: ID does not exist" containerID="a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.727025 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c"} err="failed to get container status \"a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c\": rpc error: code = NotFound desc = could not find container \"a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c\": container with ID starting with a0a67b3a2bd902bbe199331cf9e0e3a8467f8f0da6d771024698e13286d7fb9c not found: ID does not exist" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.727107 4789 scope.go:117] "RemoveContainer" containerID="b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1" Nov 24 11:43:26 crc kubenswrapper[4789]: E1124 11:43:26.727835 4789 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1\": container with ID starting with b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1 not found: ID does not exist" containerID="b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1" Nov 24 11:43:26 crc kubenswrapper[4789]: I1124 11:43:26.727872 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1"} err="failed to get container status \"b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1\": rpc error: code = NotFound desc = could not find container \"b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1\": container with ID starting with b52dcc9605666c7fe692ee91b1f3634fcff3845effbe9d8cc065a6622191c6a1 not found: ID does not exist" Nov 24 11:43:28 crc kubenswrapper[4789]: I1124 11:43:28.177483 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efde0f8e-b821-437d-9dac-2994b8321275" path="/var/lib/kubelet/pods/efde0f8e-b821-437d-9dac-2994b8321275/volumes" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.045819 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf"] Nov 24 11:43:29 crc kubenswrapper[4789]: E1124 11:43:29.046925 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efde0f8e-b821-437d-9dac-2994b8321275" containerName="extract-utilities" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.046950 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="efde0f8e-b821-437d-9dac-2994b8321275" containerName="extract-utilities" Nov 24 11:43:29 crc kubenswrapper[4789]: E1124 11:43:29.046965 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efde0f8e-b821-437d-9dac-2994b8321275" containerName="extract-content" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.046973 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="efde0f8e-b821-437d-9dac-2994b8321275" containerName="extract-content" Nov 24 11:43:29 crc kubenswrapper[4789]: E1124 11:43:29.047007 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efde0f8e-b821-437d-9dac-2994b8321275" containerName="registry-server" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.047015 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="efde0f8e-b821-437d-9dac-2994b8321275" containerName="registry-server" Nov 24 11:43:29 crc kubenswrapper[4789]: E1124 11:43:29.047031 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerName="pull" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.047038 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerName="pull" Nov 24 11:43:29 crc kubenswrapper[4789]: E1124 11:43:29.047047 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerName="util" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.047054 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerName="util" Nov 24 11:43:29 crc kubenswrapper[4789]: E1124 11:43:29.047064 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerName="extract" Nov 24 11:43:29 crc 
kubenswrapper[4789]: I1124 11:43:29.047071 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerName="extract" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.047194 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab00850e-e7eb-4a71-ae4a-54c3b3d085f1" containerName="extract" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.047215 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="efde0f8e-b821-437d-9dac-2994b8321275" containerName="registry-server" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.048081 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.052449 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-rzn7v" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.086243 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf"] Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.232730 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzwg8\" (UniqueName: \"kubernetes.io/projected/a6a8da19-ed48-499a-b951-722c2294134c-kube-api-access-fzwg8\") pod \"openstack-operator-controller-operator-6bb74f6778-sddqf\" (UID: \"a6a8da19-ed48-499a-b951-722c2294134c\") " pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.334334 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzwg8\" (UniqueName: \"kubernetes.io/projected/a6a8da19-ed48-499a-b951-722c2294134c-kube-api-access-fzwg8\") pod \"openstack-operator-controller-operator-6bb74f6778-sddqf\" (UID: \"a6a8da19-ed48-499a-b951-722c2294134c\") " pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.360859 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzwg8\" (UniqueName: \"kubernetes.io/projected/a6a8da19-ed48-499a-b951-722c2294134c-kube-api-access-fzwg8\") pod \"openstack-operator-controller-operator-6bb74f6778-sddqf\" (UID: \"a6a8da19-ed48-499a-b951-722c2294134c\") " pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.364343 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" Nov 24 11:43:29 crc kubenswrapper[4789]: I1124 11:43:29.778497 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf"] Nov 24 11:43:30 crc kubenswrapper[4789]: I1124 11:43:30.646255 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" event={"ID":"a6a8da19-ed48-499a-b951-722c2294134c","Type":"ContainerStarted","Data":"ba15c3576a9ef9e5a523a150d871e3ee83a75193b74cd69d29bed554cfd514fa"} Nov 24 11:43:34 crc kubenswrapper[4789]: I1124 11:43:34.684670 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" event={"ID":"a6a8da19-ed48-499a-b951-722c2294134c","Type":"ContainerStarted","Data":"073136b037a3040b7b41362237a52b0c0f2421daee1fccf9dce76ae55c077971"} Nov 24 11:43:34 crc kubenswrapper[4789]: I1124 11:43:34.928589 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2brsz"] Nov 24 11:43:34 crc kubenswrapper[4789]: I1124 11:43:34.930153 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:34 crc kubenswrapper[4789]: I1124 11:43:34.942739 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2brsz"] Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.109486 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjmfl\" (UniqueName: \"kubernetes.io/projected/d81b332f-2cfd-4e55-8a1d-abea95113389-kube-api-access-wjmfl\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.109549 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-catalog-content\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.109595 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-utilities\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.210382 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-utilities\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.210488 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjmfl\" (UniqueName: \"kubernetes.io/projected/d81b332f-2cfd-4e55-8a1d-abea95113389-kube-api-access-wjmfl\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 
24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.210535 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-catalog-content\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.211145 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-catalog-content\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.211444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-utilities\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.232639 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjmfl\" (UniqueName: \"kubernetes.io/projected/d81b332f-2cfd-4e55-8a1d-abea95113389-kube-api-access-wjmfl\") pod \"redhat-operators-2brsz\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:35 crc kubenswrapper[4789]: I1124 11:43:35.258943 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:36 crc kubenswrapper[4789]: I1124 11:43:36.628966 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2brsz"] Nov 24 11:43:36 crc kubenswrapper[4789]: I1124 11:43:36.697945 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2brsz" event={"ID":"d81b332f-2cfd-4e55-8a1d-abea95113389","Type":"ContainerStarted","Data":"4ff3d823b6523b8365cc4f592897040ca910066f0ff6b54cc43d64fdc72deb87"} Nov 24 11:43:36 crc kubenswrapper[4789]: I1124 11:43:36.699295 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" event={"ID":"a6a8da19-ed48-499a-b951-722c2294134c","Type":"ContainerStarted","Data":"9861bf9b8b90582fb29cb6e16919cdf9be4d2570daf329f6254204ee623444b5"} Nov 24 11:43:36 crc kubenswrapper[4789]: I1124 11:43:36.699508 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" Nov 24 11:43:36 crc kubenswrapper[4789]: I1124 11:43:36.733907 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" podStartSLOduration=1.275543888 podStartE2EDuration="7.73386637s" podCreationTimestamp="2025-11-24 11:43:29 +0000 UTC" firstStartedPulling="2025-11-24 11:43:29.783643209 +0000 UTC m=+792.366114588" lastFinishedPulling="2025-11-24 11:43:36.241965691 +0000 UTC m=+798.824437070" observedRunningTime="2025-11-24 11:43:36.731848554 +0000 UTC m=+799.314319943" watchObservedRunningTime="2025-11-24 11:43:36.73386637 +0000 UTC m=+799.316337769" Nov 24 11:43:37 crc kubenswrapper[4789]: I1124 11:43:37.706977 4789 generic.go:334] "Generic (PLEG): container 
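Note on the "Observed pod startup duration" entry above: podStartE2EDuration (7.73s) runs from podCreationTimestamp to observedRunningTime, while podStartSLOduration (1.28s) is the same interval minus the image-pull window (firstStartedPulling to lastFinishedPulling, about 6.46s here). A minimal Python sketch for extracting these fields from a saved copy of this journal, one entry per line; the filename kubelet.log is a hypothetical placeholder, not part of the log:

    import re

    # Matches pod_startup_latency_tracker.go:104 entries as they appear above.
    PATTERN = re.compile(
        r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
        r' podStartSLOduration=(?P<slo>[0-9.]+)'
        r' podStartE2EDuration="(?P<e2e>[^"]+)"'
    )

    def startup_durations(lines):
        """Yield (pod, slo_seconds, e2e_duration_string) per latency entry."""
        for line in lines:
            m = PATTERN.search(line)
            if m:
                yield m.group("pod"), float(m.group("slo")), m.group("e2e")

    if __name__ == "__main__":
        with open("kubelet.log") as f:  # placeholder path for this capture
            for pod, slo, e2e in startup_durations(f):
                print(f"{pod}: SLO {slo:.2f}s, end-to-end {e2e}")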
finished" podID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerID="76ebef0c80cdc9f2b47ef5f1613f0b509031d0ed84672d7551662b729c1af17b" exitCode=0 Nov 24 11:43:37 crc kubenswrapper[4789]: I1124 11:43:37.707048 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2brsz" event={"ID":"d81b332f-2cfd-4e55-8a1d-abea95113389","Type":"ContainerDied","Data":"76ebef0c80cdc9f2b47ef5f1613f0b509031d0ed84672d7551662b729c1af17b"} Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.526996 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sp87j"] Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.528520 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.541689 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sp87j"] Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.655129 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmxl4\" (UniqueName: \"kubernetes.io/projected/adcedb49-6c66-432a-bbbd-7bd7bf03edba-kube-api-access-dmxl4\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.655379 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-utilities\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.655465 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-catalog-content\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.714192 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2brsz" event={"ID":"d81b332f-2cfd-4e55-8a1d-abea95113389","Type":"ContainerStarted","Data":"03480dce90f9f0aa8e2752b06fc29358b14eb461e687c18ac7590dd074a74c22"} Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.756494 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-utilities\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.756817 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-catalog-content\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.757029 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmxl4\" (UniqueName: 
\"kubernetes.io/projected/adcedb49-6c66-432a-bbbd-7bd7bf03edba-kube-api-access-dmxl4\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.758575 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-utilities\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.758720 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-catalog-content\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.789571 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmxl4\" (UniqueName: \"kubernetes.io/projected/adcedb49-6c66-432a-bbbd-7bd7bf03edba-kube-api-access-dmxl4\") pod \"redhat-marketplace-sp87j\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:38 crc kubenswrapper[4789]: I1124 11:43:38.846032 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:39 crc kubenswrapper[4789]: I1124 11:43:39.284651 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sp87j"] Nov 24 11:43:39 crc kubenswrapper[4789]: W1124 11:43:39.296239 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadcedb49_6c66_432a_bbbd_7bd7bf03edba.slice/crio-3e450a3478de39ce04dd90cf829c2f0159ecf4d2d8958100493122b7f18a52c0 WatchSource:0}: Error finding container 3e450a3478de39ce04dd90cf829c2f0159ecf4d2d8958100493122b7f18a52c0: Status 404 returned error can't find the container with id 3e450a3478de39ce04dd90cf829c2f0159ecf4d2d8958100493122b7f18a52c0 Nov 24 11:43:39 crc kubenswrapper[4789]: I1124 11:43:39.368248 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6bb74f6778-sddqf" Nov 24 11:43:39 crc kubenswrapper[4789]: I1124 11:43:39.722003 4789 generic.go:334] "Generic (PLEG): container finished" podID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerID="0a8ed70b3989df9818cbb4004c6d7a1d3ae5eb28d6f194bd34bc18747126d9fd" exitCode=0 Nov 24 11:43:39 crc kubenswrapper[4789]: I1124 11:43:39.722116 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp87j" event={"ID":"adcedb49-6c66-432a-bbbd-7bd7bf03edba","Type":"ContainerDied","Data":"0a8ed70b3989df9818cbb4004c6d7a1d3ae5eb28d6f194bd34bc18747126d9fd"} Nov 24 11:43:39 crc kubenswrapper[4789]: I1124 11:43:39.722165 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp87j" event={"ID":"adcedb49-6c66-432a-bbbd-7bd7bf03edba","Type":"ContainerStarted","Data":"3e450a3478de39ce04dd90cf829c2f0159ecf4d2d8958100493122b7f18a52c0"} Nov 24 11:43:39 crc kubenswrapper[4789]: I1124 11:43:39.724794 4789 generic.go:334] "Generic (PLEG): container finished" 
podID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerID="03480dce90f9f0aa8e2752b06fc29358b14eb461e687c18ac7590dd074a74c22" exitCode=0 Nov 24 11:43:39 crc kubenswrapper[4789]: I1124 11:43:39.724829 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2brsz" event={"ID":"d81b332f-2cfd-4e55-8a1d-abea95113389","Type":"ContainerDied","Data":"03480dce90f9f0aa8e2752b06fc29358b14eb461e687c18ac7590dd074a74c22"} Nov 24 11:43:40 crc kubenswrapper[4789]: I1124 11:43:40.733021 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2brsz" event={"ID":"d81b332f-2cfd-4e55-8a1d-abea95113389","Type":"ContainerStarted","Data":"8c52d54908140cfcb365b6a1729a7027eb9a66bf1e7bb2a3d3c70fe2c1cdeada"} Nov 24 11:43:40 crc kubenswrapper[4789]: I1124 11:43:40.735031 4789 generic.go:334] "Generic (PLEG): container finished" podID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerID="c078c59ad551ea68731337258cb7a4e47e8877e4777c6196511d3db6a358c2a4" exitCode=0 Nov 24 11:43:40 crc kubenswrapper[4789]: I1124 11:43:40.735072 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp87j" event={"ID":"adcedb49-6c66-432a-bbbd-7bd7bf03edba","Type":"ContainerDied","Data":"c078c59ad551ea68731337258cb7a4e47e8877e4777c6196511d3db6a358c2a4"} Nov 24 11:43:40 crc kubenswrapper[4789]: I1124 11:43:40.772255 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2brsz" podStartSLOduration=4.395276327 podStartE2EDuration="6.772239185s" podCreationTimestamp="2025-11-24 11:43:34 +0000 UTC" firstStartedPulling="2025-11-24 11:43:37.708442356 +0000 UTC m=+800.290913775" lastFinishedPulling="2025-11-24 11:43:40.085405254 +0000 UTC m=+802.667876633" observedRunningTime="2025-11-24 11:43:40.754053636 +0000 UTC m=+803.336525015" watchObservedRunningTime="2025-11-24 11:43:40.772239185 +0000 UTC m=+803.354710564" Nov 24 11:43:41 crc kubenswrapper[4789]: I1124 11:43:41.742637 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp87j" event={"ID":"adcedb49-6c66-432a-bbbd-7bd7bf03edba","Type":"ContainerStarted","Data":"7f194b6684e93584333ece2646632fbe2ea5cf2550a04ce546d217da99a42555"} Nov 24 11:43:41 crc kubenswrapper[4789]: I1124 11:43:41.761052 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sp87j" podStartSLOduration=2.31921772 podStartE2EDuration="3.761037369s" podCreationTimestamp="2025-11-24 11:43:38 +0000 UTC" firstStartedPulling="2025-11-24 11:43:39.723578545 +0000 UTC m=+802.306049924" lastFinishedPulling="2025-11-24 11:43:41.165398184 +0000 UTC m=+803.747869573" observedRunningTime="2025-11-24 11:43:41.757713401 +0000 UTC m=+804.340184780" watchObservedRunningTime="2025-11-24 11:43:41.761037369 +0000 UTC m=+804.343508748" Nov 24 11:43:45 crc kubenswrapper[4789]: I1124 11:43:45.259383 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:45 crc kubenswrapper[4789]: I1124 11:43:45.259997 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:46 crc kubenswrapper[4789]: I1124 11:43:46.310411 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2brsz" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="registry-server" 
probeResult="failure" output=< Nov 24 11:43:46 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Nov 24 11:43:46 crc kubenswrapper[4789]: > Nov 24 11:43:47 crc kubenswrapper[4789]: I1124 11:43:47.859114 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jfrbf"] Nov 24 11:43:47 crc kubenswrapper[4789]: I1124 11:43:47.860602 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:47 crc kubenswrapper[4789]: I1124 11:43:47.880849 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jfrbf"] Nov 24 11:43:47 crc kubenswrapper[4789]: I1124 11:43:47.964918 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023c49aa-b48c-4320-a70f-3d9d969fa712-utilities\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:47 crc kubenswrapper[4789]: I1124 11:43:47.965070 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44492\" (UniqueName: \"kubernetes.io/projected/023c49aa-b48c-4320-a70f-3d9d969fa712-kube-api-access-44492\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:47 crc kubenswrapper[4789]: I1124 11:43:47.965105 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023c49aa-b48c-4320-a70f-3d9d969fa712-catalog-content\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.066532 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44492\" (UniqueName: \"kubernetes.io/projected/023c49aa-b48c-4320-a70f-3d9d969fa712-kube-api-access-44492\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.066622 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023c49aa-b48c-4320-a70f-3d9d969fa712-catalog-content\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.066649 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023c49aa-b48c-4320-a70f-3d9d969fa712-utilities\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.067192 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023c49aa-b48c-4320-a70f-3d9d969fa712-utilities\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:48 crc 
kubenswrapper[4789]: I1124 11:43:48.067648 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023c49aa-b48c-4320-a70f-3d9d969fa712-catalog-content\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.085193 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44492\" (UniqueName: \"kubernetes.io/projected/023c49aa-b48c-4320-a70f-3d9d969fa712-kube-api-access-44492\") pod \"community-operators-jfrbf\" (UID: \"023c49aa-b48c-4320-a70f-3d9d969fa712\") " pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.173951 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.718507 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jfrbf"] Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.787061 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfrbf" event={"ID":"023c49aa-b48c-4320-a70f-3d9d969fa712","Type":"ContainerStarted","Data":"9e6285fa9eec5b2bfbfdb3e5cc1a7784eb45062868d325ba1c51d5cc42067bff"} Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.846146 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.846208 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:48 crc kubenswrapper[4789]: I1124 11:43:48.887258 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:49 crc kubenswrapper[4789]: I1124 11:43:49.799619 4789 generic.go:334] "Generic (PLEG): container finished" podID="023c49aa-b48c-4320-a70f-3d9d969fa712" containerID="57779c19a9190f3adf2d2ce75b8b5aec1ebda4caed3f2a658e51f5303d6b7061" exitCode=0 Nov 24 11:43:49 crc kubenswrapper[4789]: I1124 11:43:49.799720 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfrbf" event={"ID":"023c49aa-b48c-4320-a70f-3d9d969fa712","Type":"ContainerDied","Data":"57779c19a9190f3adf2d2ce75b8b5aec1ebda4caed3f2a658e51f5303d6b7061"} Nov 24 11:43:49 crc kubenswrapper[4789]: I1124 11:43:49.857392 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:51 crc kubenswrapper[4789]: I1124 11:43:51.233952 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sp87j"] Nov 24 11:43:51 crc kubenswrapper[4789]: I1124 11:43:51.812407 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sp87j" podUID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerName="registry-server" containerID="cri-o://7f194b6684e93584333ece2646632fbe2ea5cf2550a04ce546d217da99a42555" gracePeriod=2 Nov 24 11:43:52 crc kubenswrapper[4789]: I1124 11:43:52.822845 4789 generic.go:334] "Generic (PLEG): container finished" podID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" 
containerID="7f194b6684e93584333ece2646632fbe2ea5cf2550a04ce546d217da99a42555" exitCode=0 Nov 24 11:43:52 crc kubenswrapper[4789]: I1124 11:43:52.822946 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp87j" event={"ID":"adcedb49-6c66-432a-bbbd-7bd7bf03edba","Type":"ContainerDied","Data":"7f194b6684e93584333ece2646632fbe2ea5cf2550a04ce546d217da99a42555"} Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.043285 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.113946 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-catalog-content\") pod \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.114038 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmxl4\" (UniqueName: \"kubernetes.io/projected/adcedb49-6c66-432a-bbbd-7bd7bf03edba-kube-api-access-dmxl4\") pod \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.114076 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-utilities\") pod \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\" (UID: \"adcedb49-6c66-432a-bbbd-7bd7bf03edba\") " Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.115275 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-utilities" (OuterVolumeSpecName: "utilities") pod "adcedb49-6c66-432a-bbbd-7bd7bf03edba" (UID: "adcedb49-6c66-432a-bbbd-7bd7bf03edba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.124688 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adcedb49-6c66-432a-bbbd-7bd7bf03edba-kube-api-access-dmxl4" (OuterVolumeSpecName: "kube-api-access-dmxl4") pod "adcedb49-6c66-432a-bbbd-7bd7bf03edba" (UID: "adcedb49-6c66-432a-bbbd-7bd7bf03edba"). InnerVolumeSpecName "kube-api-access-dmxl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.146567 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adcedb49-6c66-432a-bbbd-7bd7bf03edba" (UID: "adcedb49-6c66-432a-bbbd-7bd7bf03edba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.222751 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.222800 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcedb49-6c66-432a-bbbd-7bd7bf03edba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.222813 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmxl4\" (UniqueName: \"kubernetes.io/projected/adcedb49-6c66-432a-bbbd-7bd7bf03edba-kube-api-access-dmxl4\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.838372 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sp87j" event={"ID":"adcedb49-6c66-432a-bbbd-7bd7bf03edba","Type":"ContainerDied","Data":"3e450a3478de39ce04dd90cf829c2f0159ecf4d2d8958100493122b7f18a52c0"} Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.838432 4789 scope.go:117] "RemoveContainer" containerID="7f194b6684e93584333ece2646632fbe2ea5cf2550a04ce546d217da99a42555" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.838538 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sp87j" Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.881946 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sp87j"] Nov 24 11:43:53 crc kubenswrapper[4789]: I1124 11:43:53.886436 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sp87j"] Nov 24 11:43:54 crc kubenswrapper[4789]: I1124 11:43:54.185994 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" path="/var/lib/kubelet/pods/adcedb49-6c66-432a-bbbd-7bd7bf03edba/volumes" Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.298131 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.345132 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.980358 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6"] Nov 24 11:43:55 crc kubenswrapper[4789]: E1124 11:43:55.980660 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerName="extract-utilities" Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.980672 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerName="extract-utilities" Nov 24 11:43:55 crc kubenswrapper[4789]: E1124 11:43:55.980681 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerName="registry-server" Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.980688 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerName="registry-server" Nov 24 11:43:55 crc 
Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.980705 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerName="extract-content"
Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.980809 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="adcedb49-6c66-432a-bbbd-7bd7bf03edba" containerName="registry-server"
Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.981433 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6"
Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.983966 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6"]
Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.984933 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6"
Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.992338 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ztm7p"
Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.998044 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx"]
Nov 24 11:43:55 crc kubenswrapper[4789]: I1124 11:43:55.998972 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.001107 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-j7smh"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.002557 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-g7crd"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.010399 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.031380 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.031434 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.064778 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.065822 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.070100 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2xqmf"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.072331 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.073371 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.075207 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dkjhf"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.078127 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgz8z\" (UniqueName: \"kubernetes.io/projected/d6f07f19-826c-41c8-8861-97ffffe88f6e-kube-api-access-bgz8z\") pod \"designate-operator-controller-manager-767ccfd65f-vcqnx\" (UID: \"d6f07f19-826c-41c8-8861-97ffffe88f6e\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.079433 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jnt\" (UniqueName: \"kubernetes.io/projected/f0a7631e-95a4-4bb8-aa13-72b02c833aba-kube-api-access-j7jnt\") pod \"glance-operator-controller-manager-7969689c84-mt9mk\" (UID: \"f0a7631e-95a4-4bb8-aa13-72b02c833aba\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.079729 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxdfg\" (UniqueName: \"kubernetes.io/projected/d7389a19-508e-48aa-81f3-25fc9fd76fbf-kube-api-access-jxdfg\") pod \"cinder-operator-controller-manager-6498cbf48f-q5gj6\" (UID: \"d7389a19-508e-48aa-81f3-25fc9fd76fbf\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.079825 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d885m\" (UniqueName: \"kubernetes.io/projected/0b73227d-0b7b-468c-a0c3-fefa29209aa0-kube-api-access-d885m\") pod \"barbican-operator-controller-manager-75fb479bcc-4n8q6\" (UID: \"0b73227d-0b7b-468c-a0c3-fefa29209aa0\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.079940 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dqvw\" (UniqueName: \"kubernetes.io/projected/95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef-kube-api-access-9dqvw\") pod \"heat-operator-controller-manager-56f54d6746-vrsx6\" (UID: \"95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.113984 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.126535 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.135158 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.136188 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.143107 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.146536 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.147491 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.153562 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kl4g8"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.153705 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.153820 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-crqsq"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.163590 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.164583 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182053 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-z9mnh"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182735 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgz8z\" (UniqueName: \"kubernetes.io/projected/d6f07f19-826c-41c8-8861-97ffffe88f6e-kube-api-access-bgz8z\") pod \"designate-operator-controller-manager-767ccfd65f-vcqnx\" (UID: \"d6f07f19-826c-41c8-8861-97ffffe88f6e\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182770 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8r5\" (UniqueName: \"kubernetes.io/projected/661a8eee-259e-40e5-83c5-7d5b78981eb5-kube-api-access-5v8r5\") pod \"ironic-operator-controller-manager-99b499f4-tfdds\" (UID: \"661a8eee-259e-40e5-83c5-7d5b78981eb5\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182811 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7jnt\" (UniqueName: \"kubernetes.io/projected/f0a7631e-95a4-4bb8-aa13-72b02c833aba-kube-api-access-j7jnt\") pod \"glance-operator-controller-manager-7969689c84-mt9mk\" (UID: \"f0a7631e-95a4-4bb8-aa13-72b02c833aba\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182848 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdlk\" (UniqueName: \"kubernetes.io/projected/915814e7-0e49-4bec-8403-6e95d1008e72-kube-api-access-pjdlk\") pod \"infra-operator-controller-manager-6dd8864d7c-g4kfx\" (UID: \"915814e7-0e49-4bec-8403-6e95d1008e72\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182877 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxdfg\" (UniqueName: \"kubernetes.io/projected/d7389a19-508e-48aa-81f3-25fc9fd76fbf-kube-api-access-jxdfg\") pod \"cinder-operator-controller-manager-6498cbf48f-q5gj6\" (UID: \"d7389a19-508e-48aa-81f3-25fc9fd76fbf\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182904 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d885m\" (UniqueName: \"kubernetes.io/projected/0b73227d-0b7b-468c-a0c3-fefa29209aa0-kube-api-access-d885m\") pod \"barbican-operator-controller-manager-75fb479bcc-4n8q6\" (UID: \"0b73227d-0b7b-468c-a0c3-fefa29209aa0\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182920 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q4gd\" (UniqueName: \"kubernetes.io/projected/74fd2f2b-e4c9-465b-928f-adbe316321a4-kube-api-access-9q4gd\") pod \"horizon-operator-controller-manager-598f69df5d-hxrfg\" (UID: \"74fd2f2b-e4c9-465b-928f-adbe316321a4\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.182982 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dqvw\" (UniqueName: \"kubernetes.io/projected/95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef-kube-api-access-9dqvw\") pod \"heat-operator-controller-manager-56f54d6746-vrsx6\" (UID: \"95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.183002 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/915814e7-0e49-4bec-8403-6e95d1008e72-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-g4kfx\" (UID: \"915814e7-0e49-4bec-8403-6e95d1008e72\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.210017 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.210127 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.231102 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dqvw\" (UniqueName: \"kubernetes.io/projected/95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef-kube-api-access-9dqvw\") pod \"heat-operator-controller-manager-56f54d6746-vrsx6\" (UID: \"95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.243243 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d885m\" (UniqueName: \"kubernetes.io/projected/0b73227d-0b7b-468c-a0c3-fefa29209aa0-kube-api-access-d885m\") pod \"barbican-operator-controller-manager-75fb479bcc-4n8q6\" (UID: \"0b73227d-0b7b-468c-a0c3-fefa29209aa0\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.254598 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7jnt\" (UniqueName: \"kubernetes.io/projected/f0a7631e-95a4-4bb8-aa13-72b02c833aba-kube-api-access-j7jnt\") pod \"glance-operator-controller-manager-7969689c84-mt9mk\" (UID: \"f0a7631e-95a4-4bb8-aa13-72b02c833aba\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.271140 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxdfg\" (UniqueName: \"kubernetes.io/projected/d7389a19-508e-48aa-81f3-25fc9fd76fbf-kube-api-access-jxdfg\") pod \"cinder-operator-controller-manager-6498cbf48f-q5gj6\" (UID: \"d7389a19-508e-48aa-81f3-25fc9fd76fbf\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.275041 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.276033 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.280244 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgz8z\" (UniqueName: \"kubernetes.io/projected/d6f07f19-826c-41c8-8861-97ffffe88f6e-kube-api-access-bgz8z\") pod \"designate-operator-controller-manager-767ccfd65f-vcqnx\" (UID: \"d6f07f19-826c-41c8-8861-97ffffe88f6e\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.283649 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-v7qrh"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.284400 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/915814e7-0e49-4bec-8403-6e95d1008e72-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-g4kfx\" (UID: \"915814e7-0e49-4bec-8403-6e95d1008e72\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.284473 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7w58\" (UniqueName: \"kubernetes.io/projected/89488e43-e2eb-44a1-ac26-fcb0c87047f6-kube-api-access-f7w58\") pod \"keystone-operator-controller-manager-7454b96578-5wh6z\" (UID: \"89488e43-e2eb-44a1-ac26-fcb0c87047f6\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.284498 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v8r5\" (UniqueName: \"kubernetes.io/projected/661a8eee-259e-40e5-83c5-7d5b78981eb5-kube-api-access-5v8r5\") pod \"ironic-operator-controller-manager-99b499f4-tfdds\" (UID: \"661a8eee-259e-40e5-83c5-7d5b78981eb5\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.284530 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjdlk\" (UniqueName: \"kubernetes.io/projected/915814e7-0e49-4bec-8403-6e95d1008e72-kube-api-access-pjdlk\") pod \"infra-operator-controller-manager-6dd8864d7c-g4kfx\" (UID: \"915814e7-0e49-4bec-8403-6e95d1008e72\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.284558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q4gd\" (UniqueName: \"kubernetes.io/projected/74fd2f2b-e4c9-465b-928f-adbe316321a4-kube-api-access-9q4gd\") pod \"horizon-operator-controller-manager-598f69df5d-hxrfg\" (UID: \"74fd2f2b-e4c9-465b-928f-adbe316321a4\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg"
Nov 24 11:43:56 crc kubenswrapper[4789]: E1124 11:43:56.284796 4789 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 24 11:43:56 crc kubenswrapper[4789]: E1124 11:43:56.284839 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/915814e7-0e49-4bec-8403-6e95d1008e72-cert podName:915814e7-0e49-4bec-8403-6e95d1008e72 nodeName:}" failed. No retries permitted until 2025-11-24 11:43:56.784823651 +0000 UTC m=+819.367295030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/915814e7-0e49-4bec-8403-6e95d1008e72-cert") pod "infra-operator-controller-manager-6dd8864d7c-g4kfx" (UID: "915814e7-0e49-4bec-8403-6e95d1008e72") : secret "infra-operator-webhook-server-cert" not found
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.285160 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.467147 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.469938 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.477651 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.493866 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.497123 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.509149 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.515568 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjdlk\" (UniqueName: \"kubernetes.io/projected/915814e7-0e49-4bec-8403-6e95d1008e72-kube-api-access-pjdlk\") pod \"infra-operator-controller-manager-6dd8864d7c-g4kfx\" (UID: \"915814e7-0e49-4bec-8403-6e95d1008e72\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.518344 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg"]
Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.527715 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg"
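The E1124 pair above is a transient ordering race, not a crash: infra-operator-controller-manager mounts a "cert" volume backed by the infra-operator-webhook-server-cert secret, which does not exist yet at mount time, so MountVolume.SetUp fails and the kubelet schedules a retry (durationBeforeRetry 500ms here; the kubelet backs off further if the same operation keeps failing). A sketch for pulling these failures and their backoff out of the journal:

    import re

    # Matches nestedpendingoperations.go retry entries like the one above.
    RETRY = re.compile(
        r'No retries permitted until (?P<until>[0-9:. +-]+ UTC)'
        r'.*\(durationBeforeRetry (?P<delay>[^)]+)\)\. Error: (?P<err>.*)$'
    )

    def mount_failures(lines):
        """Yield (retry_delay, error_message) for each failed volume operation."""
        for line in lines:
            m = RETRY.search(line)
            if m:
                yield m.group("delay"), m.group("err").strip()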
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.561579 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v8r5\" (UniqueName: \"kubernetes.io/projected/661a8eee-259e-40e5-83c5-7d5b78981eb5-kube-api-access-5v8r5\") pod \"ironic-operator-controller-manager-99b499f4-tfdds\" (UID: \"661a8eee-259e-40e5-83c5-7d5b78981eb5\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.562611 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rctt7" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.562861 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-jzkqr" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.565211 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q4gd\" (UniqueName: \"kubernetes.io/projected/74fd2f2b-e4c9-465b-928f-adbe316321a4-kube-api-access-9q4gd\") pod \"horizon-operator-controller-manager-598f69df5d-hxrfg\" (UID: \"74fd2f2b-e4c9-465b-928f-adbe316321a4\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.566568 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.569063 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7w58\" (UniqueName: \"kubernetes.io/projected/89488e43-e2eb-44a1-ac26-fcb0c87047f6-kube-api-access-f7w58\") pod \"keystone-operator-controller-manager-7454b96578-5wh6z\" (UID: \"89488e43-e2eb-44a1-ac26-fcb0c87047f6\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.569844 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.574915 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-q9n2j" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.583492 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.613071 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7w58\" (UniqueName: \"kubernetes.io/projected/89488e43-e2eb-44a1-ac26-fcb0c87047f6-kube-api-access-f7w58\") pod \"keystone-operator-controller-manager-7454b96578-5wh6z\" (UID: \"89488e43-e2eb-44a1-ac26-fcb0c87047f6\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.620108 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.644494 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.646631 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.654859 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.661898 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.662718 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.663923 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.665873 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kwr26" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.666079 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nxt99" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.671641 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmnj4\" (UniqueName: \"kubernetes.io/projected/92381aad-0739-4a44-948f-c7dc91808a89-kube-api-access-lmnj4\") pod \"mariadb-operator-controller-manager-54b5986bb8-9vtqg\" (UID: \"92381aad-0739-4a44-948f-c7dc91808a89\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.671703 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l4pc\" (UniqueName: \"kubernetes.io/projected/f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592-kube-api-access-9l4pc\") pod \"neutron-operator-controller-manager-78bd47f458-65j74\" (UID: \"f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.671757 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdb5q\" (UniqueName: \"kubernetes.io/projected/01a1d054-85ac-46b5-94f1-7ec657e0658f-kube-api-access-fdb5q\") pod \"manila-operator-controller-manager-58f887965d-kjb9s\" (UID: \"01a1d054-85ac-46b5-94f1-7ec657e0658f\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.684721 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.693113 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.700500 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.701756 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.708634 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.708858 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pjf87" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.708955 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2brsz"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.715707 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.716908 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.718354 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kxcqp" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.722854 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.743425 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.762226 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.763866 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.770783 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zm54h" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.774229 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmnj4\" (UniqueName: \"kubernetes.io/projected/92381aad-0739-4a44-948f-c7dc91808a89-kube-api-access-lmnj4\") pod \"mariadb-operator-controller-manager-54b5986bb8-9vtqg\" (UID: \"92381aad-0739-4a44-948f-c7dc91808a89\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.774368 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l4pc\" (UniqueName: \"kubernetes.io/projected/f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592-kube-api-access-9l4pc\") pod \"neutron-operator-controller-manager-78bd47f458-65j74\" (UID: \"f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.774506 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdb5q\" (UniqueName: \"kubernetes.io/projected/01a1d054-85ac-46b5-94f1-7ec657e0658f-kube-api-access-fdb5q\") pod \"manila-operator-controller-manager-58f887965d-kjb9s\" (UID: \"01a1d054-85ac-46b5-94f1-7ec657e0658f\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.774622 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xhb\" (UniqueName: \"kubernetes.io/projected/97d7da9b-f14e-4d8b-9ab0-5607a2a556cf-kube-api-access-m7xhb\") pod \"octavia-operator-controller-manager-54cfbf4c7d-jk4w9\" (UID: \"97d7da9b-f14e-4d8b-9ab0-5607a2a556cf\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.774707 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6jbl\" (UniqueName: \"kubernetes.io/projected/6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9-kube-api-access-x6jbl\") pod \"nova-operator-controller-manager-cfbb9c588-zq9m5\" (UID: \"6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.775057 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.777608 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.787633 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-v4frd"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.788706 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.795217 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-v4frd"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.796240 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.797978 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-ffngh" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.798409 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.800551 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-c8wcv" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.801116 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdb5q\" (UniqueName: \"kubernetes.io/projected/01a1d054-85ac-46b5-94f1-7ec657e0658f-kube-api-access-fdb5q\") pod \"manila-operator-controller-manager-58f887965d-kjb9s\" (UID: \"01a1d054-85ac-46b5-94f1-7ec657e0658f\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.806249 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l4pc\" (UniqueName: \"kubernetes.io/projected/f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592-kube-api-access-9l4pc\") pod \"neutron-operator-controller-manager-78bd47f458-65j74\" (UID: \"f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.812172 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.816830 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmnj4\" (UniqueName: \"kubernetes.io/projected/92381aad-0739-4a44-948f-c7dc91808a89-kube-api-access-lmnj4\") pod \"mariadb-operator-controller-manager-54b5986bb8-9vtqg\" (UID: \"92381aad-0739-4a44-948f-c7dc91808a89\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.819825 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.820188 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.828587 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.831278 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.831810 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.840482 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nrjjv" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.844882 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.846508 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.849406 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-zrkbf" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.862104 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2brsz" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="registry-server" containerID="cri-o://8c52d54908140cfcb365b6a1729a7027eb9a66bf1e7bb2a3d3c70fe2c1cdeada" gracePeriod=2 Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.878430 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xhb\" (UniqueName: \"kubernetes.io/projected/97d7da9b-f14e-4d8b-9ab0-5607a2a556cf-kube-api-access-m7xhb\") pod \"octavia-operator-controller-manager-54cfbf4c7d-jk4w9\" (UID: \"97d7da9b-f14e-4d8b-9ab0-5607a2a556cf\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.878735 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6jbl\" (UniqueName: \"kubernetes.io/projected/6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9-kube-api-access-x6jbl\") pod \"nova-operator-controller-manager-cfbb9c588-zq9m5\" (UID: \"6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.878843 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7mrb\" (UniqueName: \"kubernetes.io/projected/57736f24-6289-42e1-918a-cffd058c0e7a-kube-api-access-d7mrb\") pod \"swift-operator-controller-manager-d656998f4-v4frd\" (UID: \"57736f24-6289-42e1-918a-cffd058c0e7a\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.878940 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbwk\" (UniqueName: \"kubernetes.io/projected/40d059bb-9e0e-4bba-bea5-866a064bb150-kube-api-access-txbwk\") pod \"ovn-operator-controller-manager-54fc5f65b7-tf44z\" (UID: \"40d059bb-9e0e-4bba-bea5-866a064bb150\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.879038 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx27d\" (UniqueName: 
\"kubernetes.io/projected/42125341-88db-4554-abe6-55807d7d54fa-kube-api-access-sx27d\") pod \"telemetry-operator-controller-manager-6d4bf84b58-8xxh4\" (UID: \"42125341-88db-4554-abe6-55807d7d54fa\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.879123 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-vq62h\" (UID: \"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.879195 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkzff\" (UniqueName: \"kubernetes.io/projected/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-kube-api-access-mkzff\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-vq62h\" (UID: \"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.879291 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlhlz\" (UniqueName: \"kubernetes.io/projected/879f31f8-27f9-4f20-a9cd-b67373fac926-kube-api-access-dlhlz\") pod \"placement-operator-controller-manager-5b797b8dff-kdkrp\" (UID: \"879f31f8-27f9-4f20-a9cd-b67373fac926\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.879378 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/915814e7-0e49-4bec-8403-6e95d1008e72-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-g4kfx\" (UID: \"915814e7-0e49-4bec-8403-6e95d1008e72\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.892806 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg"] Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.893046 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.893831 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/915814e7-0e49-4bec-8403-6e95d1008e72-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-g4kfx\" (UID: \"915814e7-0e49-4bec-8403-6e95d1008e72\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.901714 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xhb\" (UniqueName: \"kubernetes.io/projected/97d7da9b-f14e-4d8b-9ab0-5607a2a556cf-kube-api-access-m7xhb\") pod \"octavia-operator-controller-manager-54cfbf4c7d-jk4w9\" (UID: \"97d7da9b-f14e-4d8b-9ab0-5607a2a556cf\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.911572 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.937755 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6jbl\" (UniqueName: \"kubernetes.io/projected/6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9-kube-api-access-x6jbl\") pod \"nova-operator-controller-manager-cfbb9c588-zq9m5\" (UID: \"6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" Nov 24 11:43:56 crc kubenswrapper[4789]: I1124 11:43:56.940693 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.004665 4789 scope.go:117] "RemoveContainer" containerID="c078c59ad551ea68731337258cb7a4e47e8877e4777c6196511d3db6a358c2a4" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.005424 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.007833 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlhlz\" (UniqueName: \"kubernetes.io/projected/879f31f8-27f9-4f20-a9cd-b67373fac926-kube-api-access-dlhlz\") pod \"placement-operator-controller-manager-5b797b8dff-kdkrp\" (UID: \"879f31f8-27f9-4f20-a9cd-b67373fac926\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.007883 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7r7\" (UniqueName: \"kubernetes.io/projected/a7de15ed-b91f-490d-bc42-e41e929a22d1-kube-api-access-mb7r7\") pod \"watcher-operator-controller-manager-8c6448b9f-jwwfg\" (UID: \"a7de15ed-b91f-490d-bc42-e41e929a22d1\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.007919 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7mrb\" (UniqueName: \"kubernetes.io/projected/57736f24-6289-42e1-918a-cffd058c0e7a-kube-api-access-d7mrb\") pod \"swift-operator-controller-manager-d656998f4-v4frd\" (UID: \"57736f24-6289-42e1-918a-cffd058c0e7a\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.007984 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txbwk\" (UniqueName: \"kubernetes.io/projected/40d059bb-9e0e-4bba-bea5-866a064bb150-kube-api-access-txbwk\") pod \"ovn-operator-controller-manager-54fc5f65b7-tf44z\" (UID: \"40d059bb-9e0e-4bba-bea5-866a064bb150\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.008017 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx27d\" (UniqueName: \"kubernetes.io/projected/42125341-88db-4554-abe6-55807d7d54fa-kube-api-access-sx27d\") pod \"telemetry-operator-controller-manager-6d4bf84b58-8xxh4\" (UID: \"42125341-88db-4554-abe6-55807d7d54fa\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.008045 4789 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg9tk\" (UniqueName: \"kubernetes.io/projected/56a28c68-0fee-4c04-9461-7f4f4cb166a8-kube-api-access-pg9tk\") pod \"test-operator-controller-manager-b4c496f69-ttb9w\" (UID: \"56a28c68-0fee-4c04-9461-7f4f4cb166a8\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.008071 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-vq62h\" (UID: \"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.008094 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkzff\" (UniqueName: \"kubernetes.io/projected/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-kube-api-access-mkzff\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-vq62h\" (UID: \"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:57 crc kubenswrapper[4789]: E1124 11:43:57.009949 4789 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:43:57 crc kubenswrapper[4789]: E1124 11:43:57.010073 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-cert podName:123b4cfb-8a48-4e91-8cb7-20a22b3e6b16 nodeName:}" failed. No retries permitted until 2025-11-24 11:43:57.510051104 +0000 UTC m=+820.092522483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" (UID: "123b4cfb-8a48-4e91-8cb7-20a22b3e6b16") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.018070 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.021098 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7"] Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.023852 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.028963 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nzw7w" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.029433 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.041571 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlhlz\" (UniqueName: \"kubernetes.io/projected/879f31f8-27f9-4f20-a9cd-b67373fac926-kube-api-access-dlhlz\") pod \"placement-operator-controller-manager-5b797b8dff-kdkrp\" (UID: \"879f31f8-27f9-4f20-a9cd-b67373fac926\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.051393 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txbwk\" (UniqueName: \"kubernetes.io/projected/40d059bb-9e0e-4bba-bea5-866a064bb150-kube-api-access-txbwk\") pod \"ovn-operator-controller-manager-54fc5f65b7-tf44z\" (UID: \"40d059bb-9e0e-4bba-bea5-866a064bb150\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.051865 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx27d\" (UniqueName: \"kubernetes.io/projected/42125341-88db-4554-abe6-55807d7d54fa-kube-api-access-sx27d\") pod \"telemetry-operator-controller-manager-6d4bf84b58-8xxh4\" (UID: \"42125341-88db-4554-abe6-55807d7d54fa\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.058579 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7"] Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.063191 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkzff\" (UniqueName: \"kubernetes.io/projected/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-kube-api-access-mkzff\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-vq62h\" (UID: \"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.065413 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7mrb\" (UniqueName: \"kubernetes.io/projected/57736f24-6289-42e1-918a-cffd058c0e7a-kube-api-access-d7mrb\") pod \"swift-operator-controller-manager-d656998f4-v4frd\" (UID: \"57736f24-6289-42e1-918a-cffd058c0e7a\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.085716 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp"] Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.086563 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.087682 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.090555 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vxlpb" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.098484 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp"] Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.100781 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.109024 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.109196 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7r7\" (UniqueName: \"kubernetes.io/projected/a7de15ed-b91f-490d-bc42-e41e929a22d1-kube-api-access-mb7r7\") pod \"watcher-operator-controller-manager-8c6448b9f-jwwfg\" (UID: \"a7de15ed-b91f-490d-bc42-e41e929a22d1\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.109901 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzl9\" (UniqueName: \"kubernetes.io/projected/553cfbf3-1b3c-4004-9bf9-4b20de969652-kube-api-access-zzzl9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-djvqp\" (UID: \"553cfbf3-1b3c-4004-9bf9-4b20de969652\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.110020 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg9tk\" (UniqueName: \"kubernetes.io/projected/56a28c68-0fee-4c04-9461-7f4f4cb166a8-kube-api-access-pg9tk\") pod \"test-operator-controller-manager-b4c496f69-ttb9w\" (UID: \"56a28c68-0fee-4c04-9461-7f4f4cb166a8\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.110122 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert\") pod \"openstack-operator-controller-manager-7cf84c8b4f-2hxj7\" (UID: \"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.110203 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb6f7\" (UniqueName: \"kubernetes.io/projected/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-kube-api-access-qb6f7\") pod \"openstack-operator-controller-manager-7cf84c8b4f-2hxj7\" (UID: \"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.138780 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.142026 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7r7\" (UniqueName: \"kubernetes.io/projected/a7de15ed-b91f-490d-bc42-e41e929a22d1-kube-api-access-mb7r7\") pod \"watcher-operator-controller-manager-8c6448b9f-jwwfg\" (UID: \"a7de15ed-b91f-490d-bc42-e41e929a22d1\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.163632 4789 scope.go:117] "RemoveContainer" containerID="0a8ed70b3989df9818cbb4004c6d7a1d3ae5eb28d6f194bd34bc18747126d9fd" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.164203 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg9tk\" (UniqueName: \"kubernetes.io/projected/56a28c68-0fee-4c04-9461-7f4f4cb166a8-kube-api-access-pg9tk\") pod \"test-operator-controller-manager-b4c496f69-ttb9w\" (UID: \"56a28c68-0fee-4c04-9461-7f4f4cb166a8\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.175035 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.211289 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzl9\" (UniqueName: \"kubernetes.io/projected/553cfbf3-1b3c-4004-9bf9-4b20de969652-kube-api-access-zzzl9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-djvqp\" (UID: \"553cfbf3-1b3c-4004-9bf9-4b20de969652\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.212156 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert\") pod \"openstack-operator-controller-manager-7cf84c8b4f-2hxj7\" (UID: \"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.212266 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb6f7\" (UniqueName: \"kubernetes.io/projected/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-kube-api-access-qb6f7\") pod \"openstack-operator-controller-manager-7cf84c8b4f-2hxj7\" (UID: \"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:57 crc kubenswrapper[4789]: E1124 11:43:57.212775 4789 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 11:43:57 crc kubenswrapper[4789]: E1124 11:43:57.212883 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert podName:d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1 nodeName:}" failed. No retries permitted until 2025-11-24 11:43:57.712870027 +0000 UTC m=+820.295341406 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert") pod "openstack-operator-controller-manager-7cf84c8b4f-2hxj7" (UID: "d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1") : secret "webhook-server-cert" not found Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.245235 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzl9\" (UniqueName: \"kubernetes.io/projected/553cfbf3-1b3c-4004-9bf9-4b20de969652-kube-api-access-zzzl9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-djvqp\" (UID: \"553cfbf3-1b3c-4004-9bf9-4b20de969652\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.247835 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb6f7\" (UniqueName: \"kubernetes.io/projected/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-kube-api-access-qb6f7\") pod \"openstack-operator-controller-manager-7cf84c8b4f-2hxj7\" (UID: \"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.341496 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.407735 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.465720 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.520653 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-vq62h\" (UID: \"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.533831 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/123b4cfb-8a48-4e91-8cb7-20a22b3e6b16-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-vq62h\" (UID: \"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.627106 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.723402 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert\") pod \"openstack-operator-controller-manager-7cf84c8b4f-2hxj7\" (UID: \"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:57 crc kubenswrapper[4789]: E1124 11:43:57.723604 4789 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 11:43:57 crc kubenswrapper[4789]: E1124 11:43:57.723651 4789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert podName:d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1 nodeName:}" failed. No retries permitted until 2025-11-24 11:43:58.723636989 +0000 UTC m=+821.306108368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert") pod "openstack-operator-controller-manager-7cf84c8b4f-2hxj7" (UID: "d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1") : secret "webhook-server-cert" not found Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.893750 4789 generic.go:334] "Generic (PLEG): container finished" podID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerID="8c52d54908140cfcb365b6a1729a7027eb9a66bf1e7bb2a3d3c70fe2c1cdeada" exitCode=0 Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.893795 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2brsz" event={"ID":"d81b332f-2cfd-4e55-8a1d-abea95113389","Type":"ContainerDied","Data":"8c52d54908140cfcb365b6a1729a7027eb9a66bf1e7bb2a3d3c70fe2c1cdeada"} Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.893838 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2brsz" event={"ID":"d81b332f-2cfd-4e55-8a1d-abea95113389","Type":"ContainerDied","Data":"4ff3d823b6523b8365cc4f592897040ca910066f0ff6b54cc43d64fdc72deb87"} Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.893850 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ff3d823b6523b8365cc4f592897040ca910066f0ff6b54cc43d64fdc72deb87" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.936193 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:57 crc kubenswrapper[4789]: I1124 11:43:57.968528 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6"] Nov 24 11:43:57 crc kubenswrapper[4789]: W1124 11:43:57.972912 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b73227d_0b7b_468c_a0c3_fefa29209aa0.slice/crio-5b30b5b3bcaf85585ff038e6ba2a7d595e36c580b1882993b524197b827742dc WatchSource:0}: Error finding container 5b30b5b3bcaf85585ff038e6ba2a7d595e36c580b1882993b524197b827742dc: Status 404 returned error can't find the container with id 5b30b5b3bcaf85585ff038e6ba2a7d595e36c580b1882993b524197b827742dc Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.036654 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-catalog-content\") pod \"d81b332f-2cfd-4e55-8a1d-abea95113389\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.036767 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-utilities\") pod \"d81b332f-2cfd-4e55-8a1d-abea95113389\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.036817 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjmfl\" (UniqueName: \"kubernetes.io/projected/d81b332f-2cfd-4e55-8a1d-abea95113389-kube-api-access-wjmfl\") pod \"d81b332f-2cfd-4e55-8a1d-abea95113389\" (UID: \"d81b332f-2cfd-4e55-8a1d-abea95113389\") " Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.041757 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81b332f-2cfd-4e55-8a1d-abea95113389-kube-api-access-wjmfl" (OuterVolumeSpecName: "kube-api-access-wjmfl") pod "d81b332f-2cfd-4e55-8a1d-abea95113389" (UID: "d81b332f-2cfd-4e55-8a1d-abea95113389"). InnerVolumeSpecName "kube-api-access-wjmfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.042016 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-utilities" (OuterVolumeSpecName: "utilities") pod "d81b332f-2cfd-4e55-8a1d-abea95113389" (UID: "d81b332f-2cfd-4e55-8a1d-abea95113389"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.139602 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjmfl\" (UniqueName: \"kubernetes.io/projected/d81b332f-2cfd-4e55-8a1d-abea95113389-kube-api-access-wjmfl\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.139629 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.240257 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d81b332f-2cfd-4e55-8a1d-abea95113389" (UID: "d81b332f-2cfd-4e55-8a1d-abea95113389"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.241325 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81b332f-2cfd-4e55-8a1d-abea95113389-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.365229 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.382939 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.427912 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.749063 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert\") pod \"openstack-operator-controller-manager-7cf84c8b4f-2hxj7\" (UID: \"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.761481 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1-cert\") pod \"openstack-operator-controller-manager-7cf84c8b4f-2hxj7\" (UID: \"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1\") " pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.827774 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.842963 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.849340 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.856199 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.862028 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.879596 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.895274 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.900870 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.917873 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds" event={"ID":"661a8eee-259e-40e5-83c5-7d5b78981eb5","Type":"ContainerStarted","Data":"ea52c5e0f69140183761541390a502a875404827350d2a7ba70cba603a6ba9aa"} Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.923969 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg"] Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.937256 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" event={"ID":"f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592","Type":"ContainerStarted","Data":"e49ba56cffc40da32f5d73ff80d0720a59349a0bfc2eece454eb5f4c8ad3485c"} Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.939891 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6" event={"ID":"0b73227d-0b7b-468c-a0c3-fefa29209aa0","Type":"ContainerStarted","Data":"5b30b5b3bcaf85585ff038e6ba2a7d595e36c580b1882993b524197b827742dc"} Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.942297 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6" event={"ID":"95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef","Type":"ContainerStarted","Data":"b258494bb7aaf5a66aabae677cb7c69b9c15101d18076e9592e6bbc5a5ff6e0e"} Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.943685 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx" event={"ID":"d6f07f19-826c-41c8-8861-97ffffe88f6e","Type":"ContainerStarted","Data":"d26ab6c8cd3189f72cd802d420e20981fd790fc542e7ac6731ad29edd99e620c"} Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.947639 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk" event={"ID":"f0a7631e-95a4-4bb8-aa13-72b02c833aba","Type":"ContainerStarted","Data":"10f0a9af547c6e42ac228fcdad6569ad0c9a02b4a6122161f120d7164863d829"} Nov 24 11:43:58 crc kubenswrapper[4789]: W1124 11:43:58.950143 4789 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92381aad_0739_4a44_948f_c7dc91808a89.slice/crio-04aac6f3f4297edcc0ce11b5ee4b1ff4bef935593611e5a2e7137fdd2cf11a22 WatchSource:0}: Error finding container 04aac6f3f4297edcc0ce11b5ee4b1ff4bef935593611e5a2e7137fdd2cf11a22: Status 404 returned error can't find the container with id 04aac6f3f4297edcc0ce11b5ee4b1ff4bef935593611e5a2e7137fdd2cf11a22 Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.952321 4789 generic.go:334] "Generic (PLEG): container finished" podID="023c49aa-b48c-4320-a70f-3d9d969fa712" containerID="dfc0372261dc665b55bfa1932714f00c71727735d066b5c8fb86f94af185730c" exitCode=0 Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.952410 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2brsz" Nov 24 11:43:58 crc kubenswrapper[4789]: I1124 11:43:58.953184 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfrbf" event={"ID":"023c49aa-b48c-4320-a70f-3d9d969fa712","Type":"ContainerDied","Data":"dfc0372261dc665b55bfa1932714f00c71727735d066b5c8fb86f94af185730c"} Nov 24 11:43:58 crc kubenswrapper[4789]: W1124 11:43:58.958842 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01a1d054_85ac_46b5_94f1_7ec657e0658f.slice/crio-b31db37765df187e16751cba57a361d19584b9569e114d6348fdf67310242e94 WatchSource:0}: Error finding container b31db37765df187e16751cba57a361d19584b9569e114d6348fdf67310242e94: Status 404 returned error can't find the container with id b31db37765df187e16751cba57a361d19584b9569e114d6348fdf67310242e94 Nov 24 11:43:58 crc kubenswrapper[4789]: W1124 11:43:58.984892 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74fd2f2b_e4c9_465b_928f_adbe316321a4.slice/crio-503af6cf7427012bcc8ca2b151eb1975e73993f83ea28a05aa4a900c0c1078ee WatchSource:0}: Error finding container 503af6cf7427012bcc8ca2b151eb1975e73993f83ea28a05aa4a900c0c1078ee: Status 404 returned error can't find the container with id 503af6cf7427012bcc8ca2b151eb1975e73993f83ea28a05aa4a900c0c1078ee Nov 24 11:43:59 crc kubenswrapper[4789]: W1124 11:43:59.026323 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7389a19_508e_48aa_81f3_25fc9fd76fbf.slice/crio-45c9b24d27c1500981d7fe86dd18d73272161e7e3dd47413795422d527e19c40 WatchSource:0}: Error finding container 45c9b24d27c1500981d7fe86dd18d73272161e7e3dd47413795422d527e19c40: Status 404 returned error can't find the container with id 45c9b24d27c1500981d7fe86dd18d73272161e7e3dd47413795422d527e19c40 Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.032242 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.040034 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2brsz"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.044856 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2brsz"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.265043 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w"] Nov 24 11:43:59 crc 
kubenswrapper[4789]: I1124 11:43:59.283150 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.288418 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.311367 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.334324 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.338596 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.343584 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-v4frd"] Nov 24 11:43:59 crc kubenswrapper[4789]: W1124 11:43:59.349732 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod879f31f8_27f9_4f20_a9cd_b67373fac926.slice/crio-ae59d7bc27243a5adbb8179b2414377132b2a25b3f9e83981bb3d20bf182f82b WatchSource:0}: Error finding container ae59d7bc27243a5adbb8179b2414377132b2a25b3f9e83981bb3d20bf182f82b: Status 404 returned error can't find the container with id ae59d7bc27243a5adbb8179b2414377132b2a25b3f9e83981bb3d20bf182f82b Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.349775 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp"] Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.354127 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg"] Nov 24 11:43:59 crc kubenswrapper[4789]: W1124 11:43:59.355583 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod123b4cfb_8a48_4e91_8cb7_20a22b3e6b16.slice/crio-8afc3f68f6ca1c17576aef675effbd313499526d5a7cc1fd3a393ec47545561b WatchSource:0}: Error finding container 8afc3f68f6ca1c17576aef675effbd313499526d5a7cc1fd3a393ec47545561b: Status 404 returned error can't find the container with id 8afc3f68f6ca1c17576aef675effbd313499526d5a7cc1fd3a393ec47545561b Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.364411 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pg9tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-b4c496f69-ttb9w_openstack-operators(56a28c68-0fee-4c04-9461-7f4f4cb166a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.366159 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx27d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d4bf84b58-8xxh4_openstack-operators(42125341-88db-4554-abe6-55807d7d54fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:43:59 crc kubenswrapper[4789]: W1124 11:43:59.371652 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57736f24_6289_42e1_918a_cffd058c0e7a.slice/crio-d868faff9aa9801328a90d37e5b1b99677aa993a22972e4065f14fc41974013c WatchSource:0}: Error finding container d868faff9aa9801328a90d37e5b1b99677aa993a22972e4065f14fc41974013c: Status 404 returned error can't find the container with id d868faff9aa9801328a90d37e5b1b99677aa993a22972e4065f14fc41974013c Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.371778 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IM
AGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstac
k-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:qu
ay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mkzff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-8c7444f48-vq62h_openstack-operators(123b4cfb-8a48-4e91-8cb7-20a22b3e6b16): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.390781 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d7mrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
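The openstack-baremetal-operator spec above carries dozens of RELATED_IMAGE_*_URL_DEFAULT variables. A common operator pattern, and a plausible reading of this spec, is that each variable overrides a compiled-in default image URL; the helper name below (envDefault) is hypothetical and not taken from the operator's source.

package main

import (
	"fmt"
	"os"
)

// envDefault is a hypothetical helper sketching the RELATED_IMAGE_* pattern:
// use the image URL from the environment when set, else a built-in fallback.
func envDefault(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok && v != "" {
		return v
	}
	return fallback
}

func main() {
	// Fallback values copied from two entries in the dumped spec.
	fmt.Println(envDefault("RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT",
		"quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest"))
	fmt.Println(envDefault("RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT",
		"registry.redhat.io/ubi9/httpd-24:latest"))
}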
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d656998f4-v4frd_openstack-operators(57736f24-6289-42e1-918a-cffd058c0e7a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:43:59 crc kubenswrapper[4789]: W1124 11:43:59.400010 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod553cfbf3_1b3c_4004_9bf9_4b20de969652.slice/crio-a8af85b17fbbfd30ce88de71de091d453307f0070122b7a55615e4c4a7dc41a5 WatchSource:0}: Error finding container a8af85b17fbbfd30ce88de71de091d453307f0070122b7a55615e4c4a7dc41a5: Status 404 returned error can't find the container with id a8af85b17fbbfd30ce88de71de091d453307f0070122b7a55615e4c4a7dc41a5 Nov 24 11:43:59 crc kubenswrapper[4789]: W1124 11:43:59.405715 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod915814e7_0e49_4bec_8403_6e95d1008e72.slice/crio-ba27a41ad1ae71b8673f5526cf585f0c0099d020e3acb47a4cb88e4267b98181 WatchSource:0}: Error finding container ba27a41ad1ae71b8673f5526cf585f0c0099d020e3acb47a4cb88e4267b98181: Status 404 returned error can't find the container with id ba27a41ad1ae71b8673f5526cf585f0c0099d020e3acb47a4cb88e4267b98181 Nov 24 11:43:59 crc kubenswrapper[4789]: W1124 11:43:59.411579 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40d059bb_9e0e_4bba_bea5_866a064bb150.slice/crio-56a55c6ac9a7d80d11c6171f2b0700af7d3a9d8fee3b4dc318b564d9d97ca05b WatchSource:0}: Error finding container 56a55c6ac9a7d80d11c6171f2b0700af7d3a9d8fee3b4dc318b564d9d97ca05b: Status 404 returned error can't find the container with id 56a55c6ac9a7d80d11c6171f2b0700af7d3a9d8fee3b4dc318b564d9d97ca05b Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.434487 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-txbwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-54fc5f65b7-tf44z_openstack-operators(40d059bb-9e0e-4bba-bea5-866a064bb150): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.435036 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzzl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-djvqp_openstack-operators(553cfbf3-1b3c-4004-9bf9-4b20de969652): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.435230 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjdlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
infra-operator-controller-manager-6dd8864d7c-g4kfx_openstack-operators(915814e7-0e49-4bec-8403-6e95d1008e72): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.436588 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" podUID="553cfbf3-1b3c-4004-9bf9-4b20de969652" Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.453137 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7"] Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.643197 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" podUID="56a28c68-0fee-4c04-9461-7f4f4cb166a8" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.651472 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" podUID="42125341-88db-4554-abe6-55807d7d54fa" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.683083 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" podUID="123b4cfb-8a48-4e91-8cb7-20a22b3e6b16" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.838518 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" podUID="57736f24-6289-42e1-918a-cffd058c0e7a" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.879942 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" podUID="915814e7-0e49-4bec-8403-6e95d1008e72" Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.880025 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" podUID="40d059bb-9e0e-4bba-bea5-866a064bb150" Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.973246 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" event={"ID":"56a28c68-0fee-4c04-9461-7f4f4cb166a8","Type":"ContainerStarted","Data":"aed0fb706d4e5739267143baf27f8a1117876c2289443e7716b117ddeee87d84"} Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.973596 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" event={"ID":"56a28c68-0fee-4c04-9461-7f4f4cb166a8","Type":"ContainerStarted","Data":"f461e8fced0ee98e60269d5835b33a8edf8fe5229474bed280d0a8d9edb22c19"} Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.980635 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
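Every start failure in this burst reports the same cause, ErrImagePull: pull QPS exceeded. The kubelet throttles registry pulls through a token-bucket limiter governed by its registryPullQPS and registryBurst settings, and a pull that finds no token available fails immediately instead of queueing. A minimal sketch of that behavior, using golang.org/x/time/rate rather than kubelet's internal limiter, and assuming the stock defaults of 5 QPS with a burst of 10 (nothing in this log confirms this node's actual values):

package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Assumed kubelet defaults: registryPullQPS=5, registryBurst=10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	// Roughly what happens when ~30 operator deployments all pull at once:
	// the first `burst` pulls get tokens, the rest fail on the spot.
	for i := 1; i <= 30; i++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: started\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", i) // the error string seen above
		}
	}
}

This matches the pattern in the entries: a handful of images do land (the ContainerStarted events that follow), while the rest fall into back-off.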
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" podUID="56a28c68-0fee-4c04-9461-7f4f4cb166a8" Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.982239 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" event={"ID":"57736f24-6289-42e1-918a-cffd058c0e7a","Type":"ContainerStarted","Data":"3518867de10920e17c60574fddf2d9cc7afc7457c6bf73c7e4ffdce778f80851"} Nov 24 11:43:59 crc kubenswrapper[4789]: I1124 11:43:59.982272 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" event={"ID":"57736f24-6289-42e1-918a-cffd058c0e7a","Type":"ContainerStarted","Data":"d868faff9aa9801328a90d37e5b1b99677aa993a22972e4065f14fc41974013c"} Nov 24 11:43:59 crc kubenswrapper[4789]: E1124 11:43:59.984668 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" podUID="57736f24-6289-42e1-918a-cffd058c0e7a" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.015669 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" event={"ID":"97d7da9b-f14e-4d8b-9ab0-5607a2a556cf","Type":"ContainerStarted","Data":"e3b72f6018d77ef80348764c291b04abd6eaf0af370e0ecb7ebf9e9090bdf697"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.145907 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" event={"ID":"42125341-88db-4554-abe6-55807d7d54fa","Type":"ContainerStarted","Data":"267000cd6c28debbddf0e06cdce41dabee26fc189e6a605374215c63723bca45"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.145962 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" event={"ID":"42125341-88db-4554-abe6-55807d7d54fa","Type":"ContainerStarted","Data":"4e584a430df014e8d3005faa9d61491f714f8939071af034a0a0ffaee4ed7126"} Nov 24 11:44:00 crc kubenswrapper[4789]: E1124 11:44:00.160935 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" podUID="42125341-88db-4554-abe6-55807d7d54fa" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.184773 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" path="/var/lib/kubelet/pods/d81b332f-2cfd-4e55-8a1d-abea95113389/volumes" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.210925 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" 
event={"ID":"553cfbf3-1b3c-4004-9bf9-4b20de969652","Type":"ContainerStarted","Data":"a8af85b17fbbfd30ce88de71de091d453307f0070122b7a55615e4c4a7dc41a5"} Nov 24 11:44:00 crc kubenswrapper[4789]: E1124 11:44:00.211100 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" podUID="553cfbf3-1b3c-4004-9bf9-4b20de969652" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.265678 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" event={"ID":"a7de15ed-b91f-490d-bc42-e41e929a22d1","Type":"ContainerStarted","Data":"8b89b94a896587c36bc87c063f12288d5e7f841e990a6b03ffc549f5c2414ac1"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.287756 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" event={"ID":"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16","Type":"ContainerStarted","Data":"21e3ff8ffa516ca2e3e3a55621cdd4fe96b19833cc5d7cd1c847ae3aaa242787"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.287870 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" event={"ID":"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16","Type":"ContainerStarted","Data":"8afc3f68f6ca1c17576aef675effbd313499526d5a7cc1fd3a393ec47545561b"} Nov 24 11:44:00 crc kubenswrapper[4789]: E1124 11:44:00.305942 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" podUID="123b4cfb-8a48-4e91-8cb7-20a22b3e6b16" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.324135 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" event={"ID":"40d059bb-9e0e-4bba-bea5-866a064bb150","Type":"ContainerStarted","Data":"be1c48d382464efb49eb563c4346a5f05e4a5ff2c18c7dced69b4f8e33d3e083"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.324179 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" event={"ID":"40d059bb-9e0e-4bba-bea5-866a064bb150","Type":"ContainerStarted","Data":"56a55c6ac9a7d80d11c6171f2b0700af7d3a9d8fee3b4dc318b564d9d97ca05b"} Nov 24 11:44:00 crc kubenswrapper[4789]: E1124 11:44:00.330951 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" podUID="40d059bb-9e0e-4bba-bea5-866a064bb150" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.341385 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" 
event={"ID":"01a1d054-85ac-46b5-94f1-7ec657e0658f","Type":"ContainerStarted","Data":"b31db37765df187e16751cba57a361d19584b9569e114d6348fdf67310242e94"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.343403 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" event={"ID":"879f31f8-27f9-4f20-a9cd-b67373fac926","Type":"ContainerStarted","Data":"ae59d7bc27243a5adbb8179b2414377132b2a25b3f9e83981bb3d20bf182f82b"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.388867 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" event={"ID":"915814e7-0e49-4bec-8403-6e95d1008e72","Type":"ContainerStarted","Data":"afee4a0b4f469640eccd540e68c5649de17b3788db138971d26a4ab33789f152"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.388924 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" event={"ID":"915814e7-0e49-4bec-8403-6e95d1008e72","Type":"ContainerStarted","Data":"ba27a41ad1ae71b8673f5526cf585f0c0099d020e3acb47a4cb88e4267b98181"} Nov 24 11:44:00 crc kubenswrapper[4789]: E1124 11:44:00.395229 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" podUID="915814e7-0e49-4bec-8403-6e95d1008e72" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.408260 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" event={"ID":"74fd2f2b-e4c9-465b-928f-adbe316321a4","Type":"ContainerStarted","Data":"503af6cf7427012bcc8ca2b151eb1975e73993f83ea28a05aa4a900c0c1078ee"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.409610 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" event={"ID":"92381aad-0739-4a44-948f-c7dc91808a89","Type":"ContainerStarted","Data":"04aac6f3f4297edcc0ce11b5ee4b1ff4bef935593611e5a2e7137fdd2cf11a22"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.442667 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" event={"ID":"89488e43-e2eb-44a1-ac26-fcb0c87047f6","Type":"ContainerStarted","Data":"da23b19c5615aae99010cfa563f27b84b5b30b57704399e41b9563c1e2269a66"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.451017 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" event={"ID":"6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9","Type":"ContainerStarted","Data":"63dd422484e52fb3e244619b09ba5ec5c168135cafccfcc99dc87469fe60ca42"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.458012 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6" event={"ID":"d7389a19-508e-48aa-81f3-25fc9fd76fbf","Type":"ContainerStarted","Data":"45c9b24d27c1500981d7fe86dd18d73272161e7e3dd47413795422d527e19c40"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.462616 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" event={"ID":"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1","Type":"ContainerStarted","Data":"8bfb749478c732a4e06a19a652ad5b5753f8d41aab30984160fe1c78c0492511"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.462642 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" event={"ID":"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1","Type":"ContainerStarted","Data":"5e179485f639c1256da0e22d1f2569101d70b161f6005c07d4e3e88462ec9517"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.463474 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.494937 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfrbf" event={"ID":"023c49aa-b48c-4320-a70f-3d9d969fa712","Type":"ContainerStarted","Data":"b823d086d670ab89843a1ae87ecf10a469f886bb53aec4e383c07f8d50b8af03"} Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.520888 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" podStartSLOduration=4.52087229 podStartE2EDuration="4.52087229s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:44:00.520812909 +0000 UTC m=+823.103284288" watchObservedRunningTime="2025-11-24 11:44:00.52087229 +0000 UTC m=+823.103343669" Nov 24 11:44:00 crc kubenswrapper[4789]: I1124 11:44:00.571985 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jfrbf" podStartSLOduration=3.788287701 podStartE2EDuration="13.571969075s" podCreationTimestamp="2025-11-24 11:43:47 +0000 UTC" firstStartedPulling="2025-11-24 11:43:49.816246683 +0000 UTC m=+812.398718062" lastFinishedPulling="2025-11-24 11:43:59.599928057 +0000 UTC m=+822.182399436" observedRunningTime="2025-11-24 11:44:00.570003699 +0000 UTC m=+823.152475098" watchObservedRunningTime="2025-11-24 11:44:00.571969075 +0000 UTC m=+823.154440454" Nov 24 11:44:01 crc kubenswrapper[4789]: I1124 11:44:01.507779 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" event={"ID":"d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1","Type":"ContainerStarted","Data":"ac3fc53bff9ba6ae7d0ec8aae98c62125752c7e5b23b31e296562cc756e75023"} Nov 24 11:44:01 crc kubenswrapper[4789]: E1124 11:44:01.512782 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" podUID="553cfbf3-1b3c-4004-9bf9-4b20de969652" Nov 24 11:44:01 crc kubenswrapper[4789]: E1124 11:44:01.513008 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" 
pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" podUID="915814e7-0e49-4bec-8403-6e95d1008e72" Nov 24 11:44:01 crc kubenswrapper[4789]: E1124 11:44:01.513041 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" podUID="40d059bb-9e0e-4bba-bea5-866a064bb150" Nov 24 11:44:01 crc kubenswrapper[4789]: E1124 11:44:01.513097 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" podUID="57736f24-6289-42e1-918a-cffd058c0e7a" Nov 24 11:44:01 crc kubenswrapper[4789]: E1124 11:44:01.513171 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" podUID="56a28c68-0fee-4c04-9461-7f4f4cb166a8" Nov 24 11:44:01 crc kubenswrapper[4789]: E1124 11:44:01.513440 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" podUID="123b4cfb-8a48-4e91-8cb7-20a22b3e6b16" Nov 24 11:44:01 crc kubenswrapper[4789]: E1124 11:44:01.513678 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" podUID="42125341-88db-4554-abe6-55807d7d54fa" Nov 24 11:44:08 crc kubenswrapper[4789]: I1124 11:44:08.184203 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:44:08 crc kubenswrapper[4789]: I1124 11:44:08.184588 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:44:08 crc kubenswrapper[4789]: I1124 11:44:08.227679 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:44:08 crc kubenswrapper[4789]: I1124 11:44:08.635364 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jfrbf" Nov 24 11:44:08 crc kubenswrapper[4789]: I1124 11:44:08.856333 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cf84c8b4f-2hxj7" Nov 24 11:44:09 crc kubenswrapper[4789]: I1124 11:44:09.602234 4789 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jfrbf"] Nov 24 11:44:09 crc kubenswrapper[4789]: I1124 11:44:09.659516 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gsg89"] Nov 24 11:44:09 crc kubenswrapper[4789]: I1124 11:44:09.660436 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gsg89" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="registry-server" containerID="cri-o://451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf" gracePeriod=2 Nov 24 11:44:10 crc kubenswrapper[4789]: I1124 11:44:10.597262 4789 generic.go:334] "Generic (PLEG): container finished" podID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerID="451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf" exitCode=0 Nov 24 11:44:10 crc kubenswrapper[4789]: I1124 11:44:10.597347 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsg89" event={"ID":"d070801e-b0f9-43f1-9521-c3548067d7cb","Type":"ContainerDied","Data":"451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf"} Nov 24 11:44:14 crc kubenswrapper[4789]: E1124 11:44:14.001644 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 24 11:44:14 crc kubenswrapper[4789]: E1124 11:44:14.002129 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdb5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
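From 11:44:14 the failure mode changes: pulls that get past the QPS gate now fail with "rpc error: code = Canceled desc = copying config: context canceled", meaning the pull's context was canceled while CRI-O was still copying the image config; this section does not show what triggered the cancellation. A minimal sketch of how a canceled context surfaces mid-transfer (the time.After stands in for the config copy and is purely illustrative):

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(10 * time.Millisecond)
		cancel() // whoever owns the pull gives up
	}()

	select {
	case <-time.After(time.Second): // stand-in for "copying config"
		fmt.Println("pull finished")
	case <-ctx.Done():
		fmt.Println("pull failed:", ctx.Err()) // "context canceled"
	}
}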
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58f887965d-kjb9s_openstack-operators(01a1d054-85ac-46b5-94f1-7ec657e0658f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:44:15 crc kubenswrapper[4789]: E1124 11:44:15.518242 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 24 11:44:15 crc kubenswrapper[4789]: E1124 11:44:15.518754 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9q4gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-598f69df5d-hxrfg_openstack-operators(74fd2f2b-e4c9-465b-928f-adbe316321a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:44:15 crc kubenswrapper[4789]: E1124 11:44:15.994562 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991" Nov 24 11:44:15 crc kubenswrapper[4789]: E1124 11:44:15.994743 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7jnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
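The ResourceList entries in these dumps print resource.Quantity's internal representation: cpu: {{500 -3} {} 500m DecimalSI} is the value 500 at decimal scale -3 (0.5 CPU) plus a cached string form, and memory: {{536870912 0} {} BinarySI} is 536870912 bytes (512Mi) whose cached string slot is still empty. Decoding the same quantities with k8s.io/apimachinery:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("500m")       // dumped as {{500 -3} {} 500m DecimalSI}
	memLimit := resource.MustParse("512Mi") // dumped as {{536870912 0} {} BinarySI}
	memReq := resource.MustParse("256Mi")   // dumped as {{268435456 0} {} BinarySI}

	fmt.Println(cpu.MilliValue(), "millicores") // 500
	fmt.Println(memLimit.Value(), "bytes")      // 536870912
	fmt.Println(memReq.Value(), "bytes")        // 268435456
}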
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-7969689c84-mt9mk_openstack-operators(f0a7631e-95a4-4bb8-aa13-72b02c833aba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:44:17 crc kubenswrapper[4789]: E1124 11:44:17.076840 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 24 11:44:17 crc kubenswrapper[4789]: E1124 11:44:17.077050 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9dqvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-56f54d6746-vrsx6_openstack-operators(95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:44:17 crc kubenswrapper[4789]: E1124 11:44:17.738965 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf is running failed: container process not found" containerID="451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:44:17 crc kubenswrapper[4789]: E1124 11:44:17.739687 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf is running failed: container process not found" containerID="451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:44:17 crc kubenswrapper[4789]: E1124 11:44:17.740433 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf is running failed: container process not found" containerID="451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:44:17 crc kubenswrapper[4789]: E1124 11:44:17.740481 4789 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-gsg89" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="registry-server" Nov 24 11:44:18 crc kubenswrapper[4789]: E1124 11:44:18.177243 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 24 11:44:18 crc kubenswrapper[4789]: E1124 11:44:18.177600 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7xhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-54cfbf4c7d-jk4w9_openstack-operators(97d7da9b-f14e-4d8b-9ab0-5607a2a556cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:44:19 crc kubenswrapper[4789]: E1124 11:44:19.351921 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 24 11:44:19 crc kubenswrapper[4789]: E1124 11:44:19.354103 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9l4pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78bd47f458-65j74_openstack-operators(f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:44:19 crc kubenswrapper[4789]: E1124 11:44:19.696879 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 24 11:44:19 crc kubenswrapper[4789]: E1124 11:44:19.697050 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lmnj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-54b5986bb8-9vtqg_openstack-operators(92381aad-0739-4a44-948f-c7dc91808a89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.172014 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.326972 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-catalog-content\") pod \"d070801e-b0f9-43f1-9521-c3548067d7cb\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.327119 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mspkk\" (UniqueName: \"kubernetes.io/projected/d070801e-b0f9-43f1-9521-c3548067d7cb-kube-api-access-mspkk\") pod \"d070801e-b0f9-43f1-9521-c3548067d7cb\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.327147 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-utilities\") pod \"d070801e-b0f9-43f1-9521-c3548067d7cb\" (UID: \"d070801e-b0f9-43f1-9521-c3548067d7cb\") " Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.328217 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-utilities" (OuterVolumeSpecName: "utilities") pod "d070801e-b0f9-43f1-9521-c3548067d7cb" (UID: "d070801e-b0f9-43f1-9521-c3548067d7cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.334848 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d070801e-b0f9-43f1-9521-c3548067d7cb-kube-api-access-mspkk" (OuterVolumeSpecName: "kube-api-access-mspkk") pod "d070801e-b0f9-43f1-9521-c3548067d7cb" (UID: "d070801e-b0f9-43f1-9521-c3548067d7cb"). InnerVolumeSpecName "kube-api-access-mspkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.373873 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d070801e-b0f9-43f1-9521-c3548067d7cb" (UID: "d070801e-b0f9-43f1-9521-c3548067d7cb"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.428942 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.428970 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mspkk\" (UniqueName: \"kubernetes.io/projected/d070801e-b0f9-43f1-9521-c3548067d7cb-kube-api-access-mspkk\") on node \"crc\" DevicePath \"\"" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.428982 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d070801e-b0f9-43f1-9521-c3548067d7cb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.684943 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsg89" event={"ID":"d070801e-b0f9-43f1-9521-c3548067d7cb","Type":"ContainerDied","Data":"190b9c9d263e5582528d1207442df5016b00a83dbbf687f6491652bdf9a54099"} Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.684993 4789 scope.go:117] "RemoveContainer" containerID="451ae45856987934fdeb4925b119ed33cc2eff217ade7b9eb5674bc94d9bddbf" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.685011 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsg89" Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.715166 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gsg89"] Nov 24 11:44:22 crc kubenswrapper[4789]: I1124 11:44:22.721765 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gsg89"] Nov 24 11:44:24 crc kubenswrapper[4789]: I1124 11:44:24.181131 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" path="/var/lib/kubelet/pods/d070801e-b0f9-43f1-9521-c3548067d7cb/volumes" Nov 24 11:44:27 crc kubenswrapper[4789]: I1124 11:44:27.506847 4789 scope.go:117] "RemoveContainer" containerID="e3ac42f981d3618e3c1e2ce6f71ba408263a77528ea323adb1decb53540bfba2" Nov 24 11:44:27 crc kubenswrapper[4789]: I1124 11:44:27.666613 4789 scope.go:117] "RemoveContainer" containerID="59d45f61724aec867a0bfd40993883ab7b70d0c2c62ee1dfb5b0471092d84d99" Nov 24 11:44:27 crc kubenswrapper[4789]: E1124 11:44:27.877261 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" podUID="74fd2f2b-e4c9-465b-928f-adbe316321a4" Nov 24 11:44:27 crc kubenswrapper[4789]: E1124 11:44:27.878602 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" podUID="01a1d054-85ac-46b5-94f1-7ec657e0658f" Nov 24 11:44:27 crc kubenswrapper[4789]: E1124 11:44:27.952005 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" podUID="97d7da9b-f14e-4d8b-9ab0-5607a2a556cf" Nov 24 11:44:27 crc kubenswrapper[4789]: E1124 11:44:27.972288 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" podUID="f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592" Nov 24 11:44:27 crc kubenswrapper[4789]: E1124 11:44:27.972751 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk" podUID="f0a7631e-95a4-4bb8-aa13-72b02c833aba" Nov 24 11:44:28 crc kubenswrapper[4789]: E1124 11:44:28.057132 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" podUID="92381aad-0739-4a44-948f-c7dc91808a89" Nov 24 11:44:28 crc kubenswrapper[4789]: E1124 11:44:28.312303 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6" podUID="95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.763634 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" event={"ID":"40d059bb-9e0e-4bba-bea5-866a064bb150","Type":"ContainerStarted","Data":"781e7427a0dc6ae69bd2113fa26ed392160eacda447e91535f564d2aebe2c20e"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.764131 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.769356 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6" event={"ID":"0b73227d-0b7b-468c-a0c3-fefa29209aa0","Type":"ContainerStarted","Data":"c1d3b147072f86737c990922e74fa1d156f651fffde71119d524db1c61e5a069"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.777949 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" event={"ID":"879f31f8-27f9-4f20-a9cd-b67373fac926","Type":"ContainerStarted","Data":"65df3f5e91e55e7a16962deb3538d520fc169208d98e6fbba4520d141170263b"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.789721 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" event={"ID":"915814e7-0e49-4bec-8403-6e95d1008e72","Type":"ContainerStarted","Data":"358633010f19d2edf4580128d2be54c60889d73213fc4f9dbc3ac5471d61e481"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.790344 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 
11:44:28.804934 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" event={"ID":"42125341-88db-4554-abe6-55807d7d54fa","Type":"ContainerStarted","Data":"bb16b8708087456587ee87fa9ec6045214707bd2e5eebdd20fe071ada3d6be80"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.805508 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.824291 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" podStartSLOduration=4.668096419 podStartE2EDuration="32.824277531s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.43436315 +0000 UTC m=+822.016834529" lastFinishedPulling="2025-11-24 11:44:27.590544262 +0000 UTC m=+850.173015641" observedRunningTime="2025-11-24 11:44:28.817074141 +0000 UTC m=+851.399545520" watchObservedRunningTime="2025-11-24 11:44:28.824277531 +0000 UTC m=+851.406748910" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.827719 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" event={"ID":"a7de15ed-b91f-490d-bc42-e41e929a22d1","Type":"ContainerStarted","Data":"1ff4145cc220ac0aa586fb93ab44575e4f217fc03db6aec64725587d8d0be295"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.840542 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" event={"ID":"123b4cfb-8a48-4e91-8cb7-20a22b3e6b16","Type":"ContainerStarted","Data":"037e0fdbcdba4636fb20fe5191d81ad010ecee6ff551d51a1a3e0d42f48b6621"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.840828 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.847452 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" event={"ID":"56a28c68-0fee-4c04-9461-7f4f4cb166a8","Type":"ContainerStarted","Data":"23a50b11cf5781b6e0e5e448d526842f7f1eaaddc3c8aefce62806170494f003"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.848200 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.852901 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" event={"ID":"f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592","Type":"ContainerStarted","Data":"ef1d3779eadcbc61a540adcaf1189c9431de830436d61d2cd4adb717a72703c9"} Nov 24 11:44:28 crc kubenswrapper[4789]: E1124 11:44:28.855503 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" podUID="f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.862283 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6" event={"ID":"95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef","Type":"ContainerStarted","Data":"958a8aea530dc29993d748d9e190831a21d91b9c19e26baf28a274331c40fd7e"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.870088 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx" event={"ID":"d6f07f19-826c-41c8-8861-97ffffe88f6e","Type":"ContainerStarted","Data":"6b482855fc70b00ad35d9a17c1d3f8d212895a9f0c3822f95c31e0244cc2fc05"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.880193 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" event={"ID":"92381aad-0739-4a44-948f-c7dc91808a89","Type":"ContainerStarted","Data":"a0974ed42d32b1c806ccc078c322dcb8239e0948619d7eb704e65b92f37ddebb"} Nov 24 11:44:28 crc kubenswrapper[4789]: E1124 11:44:28.882310 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" podUID="92381aad-0739-4a44-948f-c7dc91808a89" Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.894879 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds" event={"ID":"661a8eee-259e-40e5-83c5-7d5b78981eb5","Type":"ContainerStarted","Data":"65b1eec96bf67bf1aff154db7521072e37923266860a96014a60b786cb6ed02d"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.915977 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" event={"ID":"89488e43-e2eb-44a1-ac26-fcb0c87047f6","Type":"ContainerStarted","Data":"284975bbed4477b82bf1404ca4d820260985cc8353d8fcc5e2782bdac44d1b0d"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.929785 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk" event={"ID":"f0a7631e-95a4-4bb8-aa13-72b02c833aba","Type":"ContainerStarted","Data":"ff9fa739b6c2fee360d3e5041607a08a2626f84aaa86ddbe5b6b125358aa8aa5"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.963627 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6" event={"ID":"d7389a19-508e-48aa-81f3-25fc9fd76fbf","Type":"ContainerStarted","Data":"b052434ec0cd77baa6df4a63456e552e83d8e69709bca26d0d33f21361594337"} Nov 24 11:44:28 crc kubenswrapper[4789]: I1124 11:44:28.990782 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" event={"ID":"97d7da9b-f14e-4d8b-9ab0-5607a2a556cf","Type":"ContainerStarted","Data":"1652f9dec3ce7a3da270b371e551403e536e82f46040b631a995a23599934ac5"} Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.027567 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" event={"ID":"74fd2f2b-e4c9-465b-928f-adbe316321a4","Type":"ContainerStarted","Data":"7ec523fa61a7788299afafb08efd74061c6cf839df4f921eed31004e427ffe8e"} Nov 24 
11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.029597 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" podStartSLOduration=5.037278306 podStartE2EDuration="33.029581758s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.435121738 +0000 UTC m=+822.017593117" lastFinishedPulling="2025-11-24 11:44:27.42742518 +0000 UTC m=+850.009896569" observedRunningTime="2025-11-24 11:44:29.026077508 +0000 UTC m=+851.608548887" watchObservedRunningTime="2025-11-24 11:44:29.029581758 +0000 UTC m=+851.612053137" Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.029964 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" podStartSLOduration=4.800448155 podStartE2EDuration="33.029960131s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.364534335 +0000 UTC m=+821.947005714" lastFinishedPulling="2025-11-24 11:44:27.594046311 +0000 UTC m=+850.176517690" observedRunningTime="2025-11-24 11:44:28.910784739 +0000 UTC m=+851.493256118" watchObservedRunningTime="2025-11-24 11:44:29.029960131 +0000 UTC m=+851.612431510" Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.031396 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" event={"ID":"01a1d054-85ac-46b5-94f1-7ec657e0658f","Type":"ContainerStarted","Data":"86bd1b57f31100c52b90c107536c24bb2050f176c6d82b8fc42f63efc41b83c2"} Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.036512 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" event={"ID":"6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9","Type":"ContainerStarted","Data":"344294588b08c1e82ad1c6cfc5d8508377afde55b7808db5f1617fba58778888"} Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.037090 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.057744 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" event={"ID":"57736f24-6289-42e1-918a-cffd058c0e7a","Type":"ContainerStarted","Data":"02847255936d0e5e1be9ce4971570f73f5542bc7638cfbc3e3e96952fdb5c7db"} Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.058389 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.075136 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" event={"ID":"553cfbf3-1b3c-4004-9bf9-4b20de969652","Type":"ContainerStarted","Data":"13cca53b6cce00a4b584e0631aa44572398e702f09068e318ec970ea33fb2509"} Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.253660 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" podStartSLOduration=5.02350706 podStartE2EDuration="33.253646658s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.364261949 +0000 UTC m=+821.946733328" lastFinishedPulling="2025-11-24 
11:44:27.594401547 +0000 UTC m=+850.176872926" observedRunningTime="2025-11-24 11:44:29.250004224 +0000 UTC m=+851.832475613" watchObservedRunningTime="2025-11-24 11:44:29.253646658 +0000 UTC m=+851.836118037" Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.382909 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" podStartSLOduration=5.159851605 podStartE2EDuration="33.382892017s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.371410493 +0000 UTC m=+821.953881872" lastFinishedPulling="2025-11-24 11:44:27.594450905 +0000 UTC m=+850.176922284" observedRunningTime="2025-11-24 11:44:29.382733623 +0000 UTC m=+851.965205002" watchObservedRunningTime="2025-11-24 11:44:29.382892017 +0000 UTC m=+851.965363396" Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.656803 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" podStartSLOduration=5.524441488 podStartE2EDuration="33.656786813s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.390635205 +0000 UTC m=+821.973106574" lastFinishedPulling="2025-11-24 11:44:27.52298052 +0000 UTC m=+850.105451899" observedRunningTime="2025-11-24 11:44:29.626701326 +0000 UTC m=+852.209172715" watchObservedRunningTime="2025-11-24 11:44:29.656786813 +0000 UTC m=+852.239258192" Nov 24 11:44:29 crc kubenswrapper[4789]: I1124 11:44:29.755961 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" podStartSLOduration=11.109051701 podStartE2EDuration="33.755945496s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.967530048 +0000 UTC m=+821.550001437" lastFinishedPulling="2025-11-24 11:44:21.614423853 +0000 UTC m=+844.196895232" observedRunningTime="2025-11-24 11:44:29.754862413 +0000 UTC m=+852.337333792" watchObservedRunningTime="2025-11-24 11:44:29.755945496 +0000 UTC m=+852.338416875" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.093544 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" event={"ID":"01a1d054-85ac-46b5-94f1-7ec657e0658f","Type":"ContainerStarted","Data":"199e199ee65e39b8f6d91164ffb87c132dac9628be93be0469156d089764f042"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.094342 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.095722 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6" event={"ID":"d7389a19-508e-48aa-81f3-25fc9fd76fbf","Type":"ContainerStarted","Data":"45dcdede73e48dd425d6ae5b55ab00dc861e7df123cb3f2cdfc1991134527209"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.096162 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.106593 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" 
event={"ID":"879f31f8-27f9-4f20-a9cd-b67373fac926","Type":"ContainerStarted","Data":"6728738464d86c2da5d081649d19579dc3fb2697188ba1674cff7de617551df4"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.106928 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.110477 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" event={"ID":"97d7da9b-f14e-4d8b-9ab0-5607a2a556cf","Type":"ContainerStarted","Data":"85f717a08dc5833f6c175d7758155bed81bd83ddcc3f4aac2c9ff7115c598b2c"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.111143 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.121008 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx" event={"ID":"d6f07f19-826c-41c8-8861-97ffffe88f6e","Type":"ContainerStarted","Data":"49281b79aadc89eb5e77cd881305673d73e3f11fc3e912580052a24fb23ac562"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.121667 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.134286 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" event={"ID":"74fd2f2b-e4c9-465b-928f-adbe316321a4","Type":"ContainerStarted","Data":"96c62b54c5c3f9cf5b10e917cdd83f4ddca89124367378b467c978a295878dcc"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.134448 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.139772 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-djvqp" podStartSLOduration=6.040626522 podStartE2EDuration="34.139757113s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.434910353 +0000 UTC m=+822.017381732" lastFinishedPulling="2025-11-24 11:44:27.534040934 +0000 UTC m=+850.116512323" observedRunningTime="2025-11-24 11:44:29.829476042 +0000 UTC m=+852.411947431" watchObservedRunningTime="2025-11-24 11:44:30.139757113 +0000 UTC m=+852.722228492" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.140897 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" podStartSLOduration=3.422675842 podStartE2EDuration="34.140891454s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.96113344 +0000 UTC m=+821.543604819" lastFinishedPulling="2025-11-24 11:44:29.679349052 +0000 UTC m=+852.261820431" observedRunningTime="2025-11-24 11:44:30.136211035 +0000 UTC m=+852.718682414" watchObservedRunningTime="2025-11-24 11:44:30.140891454 +0000 UTC m=+852.723362833" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.153106 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds" 
event={"ID":"661a8eee-259e-40e5-83c5-7d5b78981eb5","Type":"ContainerStarted","Data":"d185f2623813519a5bb5475a892c6ec5bc54d4cbb50785604b6b7f02d4904fad"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.154050 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.160176 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx" podStartSLOduration=11.95230233 podStartE2EDuration="35.160163884s" podCreationTimestamp="2025-11-24 11:43:55 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.406578241 +0000 UTC m=+820.989049620" lastFinishedPulling="2025-11-24 11:44:21.614439795 +0000 UTC m=+844.196911174" observedRunningTime="2025-11-24 11:44:30.157001801 +0000 UTC m=+852.739473180" watchObservedRunningTime="2025-11-24 11:44:30.160163884 +0000 UTC m=+852.742635263" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.184797 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" event={"ID":"89488e43-e2eb-44a1-ac26-fcb0c87047f6","Type":"ContainerStarted","Data":"f81fad67cd107bd52aa47364b5e472a3e98dd8a1e9330a8f88f5191cb2dde9db"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.185828 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" event={"ID":"6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9","Type":"ContainerStarted","Data":"fe861fcc2be8c35b0c1448c7a6e238e290b6cd3fcd5969ceb8b4a492ed20826d"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.192339 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" podStartSLOduration=3.644156621 podStartE2EDuration="34.192320961s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.046834821 +0000 UTC m=+821.629306200" lastFinishedPulling="2025-11-24 11:44:29.594999171 +0000 UTC m=+852.177470540" observedRunningTime="2025-11-24 11:44:30.182089431 +0000 UTC m=+852.764560810" watchObservedRunningTime="2025-11-24 11:44:30.192320961 +0000 UTC m=+852.774792340" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.203821 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6" event={"ID":"95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef","Type":"ContainerStarted","Data":"8e822836c64e0e2bc70724cbc930bc8b8eff1ca29d9d29a32d13cdfb08a622fc"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.204134 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.211009 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" podStartSLOduration=3.554251868 podStartE2EDuration="34.210997448s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.937127599 +0000 UTC m=+821.519598978" lastFinishedPulling="2025-11-24 11:44:29.593873179 +0000 UTC m=+852.176344558" observedRunningTime="2025-11-24 11:44:30.207921659 +0000 UTC m=+852.790393038" watchObservedRunningTime="2025-11-24 11:44:30.210997448 
+0000 UTC m=+852.793468827" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.225717 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" event={"ID":"a7de15ed-b91f-490d-bc42-e41e929a22d1","Type":"ContainerStarted","Data":"5da6ed37ef3c70bc45ebf788637d8c8e23f38cd8537252b0c420c09ef1a7a58f"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.225979 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.238868 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6" event={"ID":"0b73227d-0b7b-468c-a0c3-fefa29209aa0","Type":"ContainerStarted","Data":"31328fcc7e492790a66aff5b12f387f79e3c8c42bccf2094c97cb9f72444c994"} Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.239672 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.246061 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6" podStartSLOduration=12.678284374 podStartE2EDuration="35.2460427s" podCreationTimestamp="2025-11-24 11:43:55 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.045264055 +0000 UTC m=+821.627735434" lastFinishedPulling="2025-11-24 11:44:21.613022381 +0000 UTC m=+844.195493760" observedRunningTime="2025-11-24 11:44:30.242707943 +0000 UTC m=+852.825179322" watchObservedRunningTime="2025-11-24 11:44:30.2460427 +0000 UTC m=+852.828514079" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.276099 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" podStartSLOduration=12.023365362 podStartE2EDuration="34.276076687s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.360478321 +0000 UTC m=+821.942949690" lastFinishedPulling="2025-11-24 11:44:21.613189636 +0000 UTC m=+844.195661015" observedRunningTime="2025-11-24 11:44:30.268871337 +0000 UTC m=+852.851342716" watchObservedRunningTime="2025-11-24 11:44:30.276076687 +0000 UTC m=+852.858548056" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.297318 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6" podStartSLOduration=11.67445847 podStartE2EDuration="35.297303324s" podCreationTimestamp="2025-11-24 11:43:55 +0000 UTC" firstStartedPulling="2025-11-24 11:43:57.990537966 +0000 UTC m=+820.573009345" lastFinishedPulling="2025-11-24 11:44:21.61338279 +0000 UTC m=+844.195854199" observedRunningTime="2025-11-24 11:44:30.293231019 +0000 UTC m=+852.875702398" watchObservedRunningTime="2025-11-24 11:44:30.297303324 +0000 UTC m=+852.879774693" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.356765 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" podStartSLOduration=11.656190296 podStartE2EDuration="34.356747987s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.913969096 +0000 UTC m=+821.496440465" 
lastFinishedPulling="2025-11-24 11:44:21.614526777 +0000 UTC m=+844.196998156" observedRunningTime="2025-11-24 11:44:30.327738845 +0000 UTC m=+852.910210234" watchObservedRunningTime="2025-11-24 11:44:30.356747987 +0000 UTC m=+852.939219366" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.396056 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" podStartSLOduration=12.193400973 podStartE2EDuration="34.396042076s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:59.411862273 +0000 UTC m=+821.994333652" lastFinishedPulling="2025-11-24 11:44:21.614503376 +0000 UTC m=+844.196974755" observedRunningTime="2025-11-24 11:44:30.393219328 +0000 UTC m=+852.975690707" watchObservedRunningTime="2025-11-24 11:44:30.396042076 +0000 UTC m=+852.978513445" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.426972 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds" podStartSLOduration=11.240456153 podStartE2EDuration="34.426950516s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.428092425 +0000 UTC m=+821.010563804" lastFinishedPulling="2025-11-24 11:44:21.614586788 +0000 UTC m=+844.197058167" observedRunningTime="2025-11-24 11:44:30.422172702 +0000 UTC m=+853.004644081" watchObservedRunningTime="2025-11-24 11:44:30.426950516 +0000 UTC m=+853.009421895" Nov 24 11:44:30 crc kubenswrapper[4789]: I1124 11:44:30.438927 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6" podStartSLOduration=3.972562986 podStartE2EDuration="34.438854764s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.867851686 +0000 UTC m=+821.450323055" lastFinishedPulling="2025-11-24 11:44:29.334143454 +0000 UTC m=+851.916614833" observedRunningTime="2025-11-24 11:44:30.43478247 +0000 UTC m=+853.017253859" watchObservedRunningTime="2025-11-24 11:44:30.438854764 +0000 UTC m=+853.021326143" Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.245187 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" event={"ID":"92381aad-0739-4a44-948f-c7dc91808a89","Type":"ContainerStarted","Data":"5c157d561dbb0dde4b09388f6ddd2c6d6c1a946babcb84b8afa0c3d88ac86828"} Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.245379 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.247672 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk" event={"ID":"f0a7631e-95a4-4bb8-aa13-72b02c833aba","Type":"ContainerStarted","Data":"79722210edd6e36037af4e4189924b844c9b0a3fb17aed18a7b3a549eb6a6fcf"} Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.247796 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk" Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.249425 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" 
event={"ID":"f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592","Type":"ContainerStarted","Data":"d9e8d840ffde1db8dbe9bf81817c46d30d5b1eadf865cfa2109f9c6a4a7e0b34"} Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.250336 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.271595 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" podStartSLOduration=3.4968699819999998 podStartE2EDuration="35.271579576s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.954205591 +0000 UTC m=+821.536676970" lastFinishedPulling="2025-11-24 11:44:30.728915185 +0000 UTC m=+853.311386564" observedRunningTime="2025-11-24 11:44:31.2698665 +0000 UTC m=+853.852337869" watchObservedRunningTime="2025-11-24 11:44:31.271579576 +0000 UTC m=+853.854050955" Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.292004 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" podStartSLOduration=3.544663797 podStartE2EDuration="35.291987668s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.899679527 +0000 UTC m=+821.482150906" lastFinishedPulling="2025-11-24 11:44:30.647003398 +0000 UTC m=+853.229474777" observedRunningTime="2025-11-24 11:44:31.288553075 +0000 UTC m=+853.871024444" watchObservedRunningTime="2025-11-24 11:44:31.291987668 +0000 UTC m=+853.874459047" Nov 24 11:44:31 crc kubenswrapper[4789]: I1124 11:44:31.313129 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk" podStartSLOduration=4.011143018 podStartE2EDuration="35.313112729s" podCreationTimestamp="2025-11-24 11:43:56 +0000 UTC" firstStartedPulling="2025-11-24 11:43:58.406337255 +0000 UTC m=+820.988808634" lastFinishedPulling="2025-11-24 11:44:29.708306966 +0000 UTC m=+852.290778345" observedRunningTime="2025-11-24 11:44:31.306760882 +0000 UTC m=+853.889232251" watchObservedRunningTime="2025-11-24 11:44:31.313112729 +0000 UTC m=+853.895584108" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.476146 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7969689c84-mt9mk" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.477864 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vcqnx" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.482799 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-4n8q6" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.492840 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-vrsx6" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.532511 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-q5gj6" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.779177 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-hxrfg" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.835863 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-5wh6z" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.835922 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tfdds" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.902006 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58f887965d-kjb9s" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.913939 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-9vtqg" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.945091 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" Nov 24 11:44:36 crc kubenswrapper[4789]: I1124 11:44:36.953390 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-65j74" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.008857 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-zq9m5" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.020789 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-jk4w9" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.090506 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-kdkrp" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.112436 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-g4kfx" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.113172 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d656998f4-v4frd" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.145845 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8xxh4" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.180449 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-jwwfg" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.346081 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-tf44z" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.469953 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-b4c496f69-ttb9w" Nov 24 11:44:37 crc kubenswrapper[4789]: I1124 11:44:37.635030 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-vq62h" Nov 24 11:44:50 crc 
kubenswrapper[4789]: I1124 11:44:50.162497 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 11:44:50 crc kubenswrapper[4789]: I1124 11:44:50.163113 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.054556 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gndw"]
Nov 24 11:44:53 crc kubenswrapper[4789]: E1124 11:44:53.056750 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="extract-content"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.056846 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="extract-content"
Nov 24 11:44:53 crc kubenswrapper[4789]: E1124 11:44:53.056998 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="extract-content"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.057055 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="extract-content"
Nov 24 11:44:53 crc kubenswrapper[4789]: E1124 11:44:53.057119 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="registry-server"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.057171 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="registry-server"
Nov 24 11:44:53 crc kubenswrapper[4789]: E1124 11:44:53.057231 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="registry-server"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.057287 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="registry-server"
Nov 24 11:44:53 crc kubenswrapper[4789]: E1124 11:44:53.057345 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="extract-utilities"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.057405 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="extract-utilities"
Nov 24 11:44:53 crc kubenswrapper[4789]: E1124 11:44:53.057485 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="extract-utilities"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.057562 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="extract-utilities"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.057773 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d070801e-b0f9-43f1-9521-c3548067d7cb" containerName="registry-server"
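The patch_prober/prober pair above records one failed liveness probe: an HTTP GET against 127.0.0.1:8798/health was refused, meaning nothing was listening on that port at that instant (typical while the machine-config-daemon container is restarting). The cpu_manager/state_mem/memory_manager block that follows is unrelated housekeeping: admitting the new dnsmasq pod triggers RemoveStaleState, which drops CPU and memory pinning records left by the deleted catalog pods' extract-utilities/extract-content/registry-server containers. The shape of the probe, sketched in Go with the URL from the log (illustrative, not the kubelet's prober code; the errno check is Linux-specific):

    package main

    import (
        "errors"
        "fmt"
        "net/http"
        "syscall"
        "time"
    )

    func main() {
        c := &http.Client{Timeout: time.Second}
        resp, err := c.Get("http://127.0.0.1:8798/health")
        if err != nil {
            if errors.Is(err, syscall.ECONNREFUSED) {
                fmt.Println("probe failure: connect: connection refused")
                return
            }
            fmt.Println("probe failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe result:", resp.Status) // any 2xx/3xx counts as success
    }

Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.057834 4789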
memory_manager.go:354] "RemoveStaleState removing state" podUID="d81b332f-2cfd-4e55-8a1d-abea95113389" containerName="registry-server"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.058562 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.062231 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fhc8j"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.062501 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.062540 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.062555 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.111147 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd2927f8-d0f3-444c-8d8a-51d76f298b85-config\") pod \"dnsmasq-dns-675f4bcbfc-6gndw\" (UID: \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.111468 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz5kh\" (UniqueName: \"kubernetes.io/projected/fd2927f8-d0f3-444c-8d8a-51d76f298b85-kube-api-access-bz5kh\") pod \"dnsmasq-dns-675f4bcbfc-6gndw\" (UID: \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw"
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.118331 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gndw"]
Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.126680 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-87j46"]
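Each "Caches populated" line above is a kubelet-side reflector warming a watch on exactly one object the new pod references: its image-pull secret (dnsmasq-dns-dockercfg-fhc8j), its "dns" ConfigMap, and the CA bundles injected into every namespace (kube-root-ca.crt, openshift-service-ca.crt). The watches are scoped to a single name, not a whole namespace. The equivalent client-go pattern, sketched for the "dns" ConfigMap (assumes a reachable kubeconfig; this is not the kubelet's own wiring):

    package main

    import (
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Scope the list/watch to one object by name, as the per-pod
        // reflectors do for each Secret/ConfigMap a pod references.
        f := informers.NewSharedInformerFactoryWithOptions(cs, 30*time.Second,
            informers.WithNamespace("openstack"),
            informers.WithTweakListOptions(func(o *metav1.ListOptions) {
                o.FieldSelector = "metadata.name=dns"
            }))
        f.Core().V1().ConfigMaps().Informer() // register before Start
        stop := make(chan struct{})
        defer close(stop)
        f.Start(stop)
        f.WaitForCacheSync(stop)
        fmt.Println(`Caches populated for *v1.ConfigMap "openstack"/"dns"`)
    }

Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.128289 4789 util.go:30] "No sandbox for pod can be found.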
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.137381 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.145660 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-87j46"] Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.213194 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd2927f8-d0f3-444c-8d8a-51d76f298b85-config\") pod \"dnsmasq-dns-675f4bcbfc-6gndw\" (UID: \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.213700 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz5kh\" (UniqueName: \"kubernetes.io/projected/fd2927f8-d0f3-444c-8d8a-51d76f298b85-kube-api-access-bz5kh\") pod \"dnsmasq-dns-675f4bcbfc-6gndw\" (UID: \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.214296 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd2927f8-d0f3-444c-8d8a-51d76f298b85-config\") pod \"dnsmasq-dns-675f4bcbfc-6gndw\" (UID: \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.235426 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz5kh\" (UniqueName: \"kubernetes.io/projected/fd2927f8-d0f3-444c-8d8a-51d76f298b85-kube-api-access-bz5kh\") pod \"dnsmasq-dns-675f4bcbfc-6gndw\" (UID: \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.314439 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.314855 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-config\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.316050 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzvp\" (UniqueName: \"kubernetes.io/projected/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-kube-api-access-mxzvp\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.374355 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.417041 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-config\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.417128 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxzvp\" (UniqueName: \"kubernetes.io/projected/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-kube-api-access-mxzvp\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.417208 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.418200 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.418521 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-config\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.443807 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxzvp\" (UniqueName: \"kubernetes.io/projected/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-kube-api-access-mxzvp\") pod \"dnsmasq-dns-78dd6ddcc-87j46\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.740097 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:44:53 crc kubenswrapper[4789]: I1124 11:44:53.901631 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gndw"] Nov 24 11:44:54 crc kubenswrapper[4789]: I1124 11:44:54.201956 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-87j46"] Nov 24 11:44:54 crc kubenswrapper[4789]: W1124 11:44:54.213030 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c20dfdf_b0b2_4f8f_aaa8_d4ae97224af2.slice/crio-e4161c2a075b6fba91ea14ba090ec88e921142dd7250c30b08ae7195599e6543 WatchSource:0}: Error finding container e4161c2a075b6fba91ea14ba090ec88e921142dd7250c30b08ae7195599e6543: Status 404 returned error can't find the container with id e4161c2a075b6fba91ea14ba090ec88e921142dd7250c30b08ae7195599e6543 Nov 24 11:44:54 crc kubenswrapper[4789]: I1124 11:44:54.443550 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" event={"ID":"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2","Type":"ContainerStarted","Data":"e4161c2a075b6fba91ea14ba090ec88e921142dd7250c30b08ae7195599e6543"} Nov 24 11:44:54 crc kubenswrapper[4789]: I1124 11:44:54.447004 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" event={"ID":"fd2927f8-d0f3-444c-8d8a-51d76f298b85","Type":"ContainerStarted","Data":"659bb7355d22c65922af7b900d841434a553404b82813e3f226fbdaa32790d4d"} Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.364530 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gndw"] Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.387434 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d85r8"] Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.388597 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.425006 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d85r8"] Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.478251 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.478334 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-config\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.478439 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58x5\" (UniqueName: \"kubernetes.io/projected/12a839b1-6b99-4bc4-a4b1-40db5cd77076-kube-api-access-j58x5\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.579577 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-config\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.579668 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58x5\" (UniqueName: \"kubernetes.io/projected/12a839b1-6b99-4bc4-a4b1-40db5cd77076-kube-api-access-j58x5\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.579710 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.580556 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.580591 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-config\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.616048 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58x5\" (UniqueName: 
\"kubernetes.io/projected/12a839b1-6b99-4bc4-a4b1-40db5cd77076-kube-api-access-j58x5\") pod \"dnsmasq-dns-666b6646f7-d85r8\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.709986 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-87j46"] Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.713126 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.746404 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2xtcq"] Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.747546 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.758904 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2xtcq"] Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.782514 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7j5x\" (UniqueName: \"kubernetes.io/projected/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-kube-api-access-k7j5x\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.782562 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-config\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.782641 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.884185 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.884269 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7j5x\" (UniqueName: \"kubernetes.io/projected/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-kube-api-access-k7j5x\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.884296 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-config\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.885431 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-config\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.885514 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:56 crc kubenswrapper[4789]: I1124 11:44:56.914277 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7j5x\" (UniqueName: \"kubernetes.io/projected/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-kube-api-access-k7j5x\") pod \"dnsmasq-dns-57d769cc4f-2xtcq\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.140489 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.307528 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d85r8"] Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.633368 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.634610 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.636855 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.637022 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.637167 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.637300 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.639124 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vvrch" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.643630 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.644201 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.691659 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.798634 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.798685 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.798712 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.798730 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.798906 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.798998 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.799156 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.799193 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v46pb\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-kube-api-access-v46pb\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.799240 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-config-data\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.799290 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.799332 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.900909 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-config-data\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.900953 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.900974 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901011 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901033 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901053 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901125 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901153 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901177 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901220 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0"
Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901236 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v46pb\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-kube-api-access-v46pb\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0"
Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901729 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-config-data\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0"
Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.901968 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0"
Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.902788 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0"
Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.902999 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0"
Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.903125 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0"
Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.906146 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0"
Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.907949 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
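local-storage07-crc is the only volume in this set to log MountVolume.MountDevice as well as SetUp: device staging happens once per volume per node, and for a preprovisioned local PV it amounts to resolving the host directory (here /mnt/openstack/pv07); SetUp then exposes the staged volume to the individual pod. The two-phase contract, reduced to a sketch (simplified names; the kubelet's real interfaces live under pkg/volume):

    package volumesketch

    // DeviceMounter stages a volume once per node. For a local PV the
    // "staging" is resolving the preprovisioned host directory, which is
    // why the log reports device mount path "/mnt/openstack/pv07" at once.
    type DeviceMounter interface {
        MountDevice(volumeName, devicePath string) error
    }

    // Mounter exposes a staged volume to one pod, typically as a bind
    // mount under /var/lib/kubelet/pods/<pod-uid>/volumes/.
    type Mounter interface {
        SetUp(podUID, devicePath, podDir string) error
    }

Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.909366 4789 util.go:30] "No sandbox for pod can be found.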
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.912499 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.913485 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h2b58" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.913636 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.913781 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.913908 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.913993 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.914102 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.914238 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.917632 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.953721 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.954396 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.954983 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v46pb\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-kube-api-access-v46pb\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.985111 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " pod="openstack/rabbitmq-server-0" Nov 24 11:44:57 crc kubenswrapper[4789]: I1124 11:44:57.996531 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " 
pod="openstack/rabbitmq-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105495 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105570 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105599 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105681 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105723 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105752 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105780 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105821 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n749d\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-kube-api-access-n749d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105843 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad2c0f97-8696-425d-bd5a-42a24bee8297-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105902 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad2c0f97-8696-425d-bd5a-42a24bee8297-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.105919 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207323 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad2c0f97-8696-425d-bd5a-42a24bee8297-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207378 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad2c0f97-8696-425d-bd5a-42a24bee8297-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207398 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207428 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207452 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207490 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207522 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc 
kubenswrapper[4789]: I1124 11:44:58.207536 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207559 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207584 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.207599 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n749d\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-kube-api-access-n749d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.208831 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.209089 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.209288 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.209301 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.210599 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad2c0f97-8696-425d-bd5a-42a24bee8297-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.213909 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.213993 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.230969 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad2c0f97-8696-425d-bd5a-42a24bee8297-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.252207 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.257440 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n749d\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-kube-api-access-n749d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.258112 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.276271 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.286122 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:58 crc kubenswrapper[4789]: I1124 11:44:58.350802 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.339824 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.341263 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.347673 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.347774 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vx5tf"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.347831 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.348015 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.359925 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.362574 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.435124 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6236001-96b0-4425-9f1f-eb84778d290a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.435395 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.435510 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.435576 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.435665 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6236001-96b0-4425-9f1f-eb84778d290a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.435801 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6236001-96b0-4425-9f1f-eb84778d290a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.435832 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlnx6\" (UniqueName: \"kubernetes.io/projected/e6236001-96b0-4425-9f1f-eb84778d290a-kube-api-access-rlnx6\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.435864 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.539161 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6236001-96b0-4425-9f1f-eb84778d290a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.539240 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlnx6\" (UniqueName: \"kubernetes.io/projected/e6236001-96b0-4425-9f1f-eb84778d290a-kube-api-access-rlnx6\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.539282 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.539363 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6236001-96b0-4425-9f1f-eb84778d290a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.539424 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.539521 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.539562 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.539599 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6236001-96b0-4425-9f1f-eb84778d290a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.540379 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.540555 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6236001-96b0-4425-9f1f-eb84778d290a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.540764 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.541023 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.541548 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6236001-96b0-4425-9f1f-eb84778d290a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.558137 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6236001-96b0-4425-9f1f-eb84778d290a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.559924 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6236001-96b0-4425-9f1f-eb84778d290a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.570745 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlnx6\" (UniqueName: \"kubernetes.io/projected/e6236001-96b0-4425-9f1f-eb84778d290a-kube-api-access-rlnx6\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.590801 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6236001-96b0-4425-9f1f-eb84778d290a\") " pod="openstack/openstack-galera-0"
Nov 24 11:44:59 crc kubenswrapper[4789]: I1124 11:44:59.690237 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.139770 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"]
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.148792 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"]
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.148877 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.153109 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.153249 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.249150 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dec57c49-8f33-4945-902f-bc30c4f577a7-config-volume\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.249223 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dec57c49-8f33-4945-902f-bc30c4f577a7-secret-volume\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.249284 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4lr\" (UniqueName: \"kubernetes.io/projected/dec57c49-8f33-4945-902f-bc30c4f577a7-kube-api-access-9h4lr\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.350185 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dec57c49-8f33-4945-902f-bc30c4f577a7-config-volume\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.350230 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dec57c49-8f33-4945-902f-bc30c4f577a7-secret-volume\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.350268 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h4lr\" (UniqueName: \"kubernetes.io/projected/dec57c49-8f33-4945-902f-bc30c4f577a7-kube-api-access-9h4lr\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.351930 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dec57c49-8f33-4945-902f-bc30c4f577a7-config-volume\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.365286 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dec57c49-8f33-4945-902f-bc30c4f577a7-secret-volume\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.370067 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h4lr\" (UniqueName: \"kubernetes.io/projected/dec57c49-8f33-4945-902f-bc30c4f577a7-kube-api-access-9h4lr\") pod \"collect-profiles-29399745-zjhkb\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.486157 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.871244 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.873918 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.878799 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.879436 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.883572 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.887303 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-c599w"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.889324 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.961228 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.961274 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6j72\" (UniqueName: \"kubernetes.io/projected/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-kube-api-access-g6j72\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.961307 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.961328 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.961350 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.961402 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.961419 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:00 crc kubenswrapper[4789]: I1124 11:45:00.961629 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.063639 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.063696 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.063740 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.063793 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.063835 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6j72\" (UniqueName: \"kubernetes.io/projected/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-kube-api-access-g6j72\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.063875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.063904 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.063933 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.064756 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.065120 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.065293 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.066810 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.067119 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.086171 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.088095 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.097175 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6j72\" (UniqueName: \"kubernetes.io/projected/9f6dd80c-3e9a-4ee6-83f8-40195165ec1c-kube-api-access-g6j72\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.106625 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c\") " pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.198300 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.208748 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.210217 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.216855 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.217083 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.217216 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-smqrd"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.227001 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.267162 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2t6d\" (UniqueName: \"kubernetes.io/projected/6583a8fe-db60-4eac-8bd0-32278517eff8-kube-api-access-w2t6d\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.267259 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6583a8fe-db60-4eac-8bd0-32278517eff8-config-data\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.267297 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6583a8fe-db60-4eac-8bd0-32278517eff8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.267314 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6583a8fe-db60-4eac-8bd0-32278517eff8-kolla-config\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.267331 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6583a8fe-db60-4eac-8bd0-32278517eff8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.369202 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6583a8fe-db60-4eac-8bd0-32278517eff8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.369264 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6583a8fe-db60-4eac-8bd0-32278517eff8-kolla-config\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.369293 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6583a8fe-db60-4eac-8bd0-32278517eff8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.369362 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2t6d\" (UniqueName: \"kubernetes.io/projected/6583a8fe-db60-4eac-8bd0-32278517eff8-kube-api-access-w2t6d\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.369442 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6583a8fe-db60-4eac-8bd0-32278517eff8-config-data\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.370377 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6583a8fe-db60-4eac-8bd0-32278517eff8-config-data\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.371276 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6583a8fe-db60-4eac-8bd0-32278517eff8-kolla-config\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.377653 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6583a8fe-db60-4eac-8bd0-32278517eff8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.392988 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6583a8fe-db60-4eac-8bd0-32278517eff8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.393158 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2t6d\" (UniqueName: \"kubernetes.io/projected/6583a8fe-db60-4eac-8bd0-32278517eff8-kube-api-access-w2t6d\") pod \"memcached-0\" (UID: \"6583a8fe-db60-4eac-8bd0-32278517eff8\") " pod="openstack/memcached-0"
Nov 24 11:45:01 crc kubenswrapper[4789]: I1124 11:45:01.527027 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Nov 24 11:45:02 crc kubenswrapper[4789]: I1124 11:45:02.546952 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" event={"ID":"12a839b1-6b99-4bc4-a4b1-40db5cd77076","Type":"ContainerStarted","Data":"b09d3da4320fe085f2351f8c1414b0082049976fdf823332556fa5c26ec49e94"}
Nov 24 11:45:03 crc kubenswrapper[4789]: I1124 11:45:03.113800 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 11:45:03 crc kubenswrapper[4789]: I1124 11:45:03.116961 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 11:45:03 crc kubenswrapper[4789]: I1124 11:45:03.119830 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-kvp52"
Nov 24 11:45:03 crc kubenswrapper[4789]: I1124 11:45:03.138380 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 11:45:03 crc kubenswrapper[4789]: I1124 11:45:03.200254 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkwlb\" (UniqueName: \"kubernetes.io/projected/e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1-kube-api-access-dkwlb\") pod \"kube-state-metrics-0\" (UID: \"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:45:03 crc kubenswrapper[4789]: I1124 11:45:03.301779 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkwlb\" (UniqueName: \"kubernetes.io/projected/e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1-kube-api-access-dkwlb\") pod \"kube-state-metrics-0\" (UID: \"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:45:03 crc kubenswrapper[4789]: I1124 11:45:03.329742 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkwlb\" (UniqueName: \"kubernetes.io/projected/e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1-kube-api-access-dkwlb\") pod \"kube-state-metrics-0\" (UID: \"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:45:03 crc kubenswrapper[4789]: I1124 11:45:03.434369 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 11:45:05 crc kubenswrapper[4789]: I1124 11:45:05.393440 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 11:45:05 crc kubenswrapper[4789]: I1124 11:45:05.397279 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb"]
Nov 24 11:45:06 crc kubenswrapper[4789]: I1124 11:45:06.935959 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zh2n4"]
Nov 24 11:45:06 crc kubenswrapper[4789]: I1124 11:45:06.940290 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:06 crc kubenswrapper[4789]: I1124 11:45:06.992761 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Nov 24 11:45:06 crc kubenswrapper[4789]: I1124 11:45:06.993619 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Nov 24 11:45:06 crc kubenswrapper[4789]: I1124 11:45:06.993687 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-klzlx"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.033168 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zh2n4"]
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.053967 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4tbr6"]
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.055362 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.069257 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4tbr6"]
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.094662 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c77484cd-66ed-4471-9136-5e44eadd28ad-scripts\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.094706 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-run\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.094723 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-run-ovn\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.094741 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77484cd-66ed-4471-9136-5e44eadd28ad-combined-ca-bundle\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.094763 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zbtd\" (UniqueName: \"kubernetes.io/projected/c77484cd-66ed-4471-9136-5e44eadd28ad-kube-api-access-2zbtd\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.094782 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c77484cd-66ed-4471-9136-5e44eadd28ad-ovn-controller-tls-certs\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.094840 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-log-ovn\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.195936 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/315d6386-62b1-4775-8185-2814e6b91bf5-scripts\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.196064 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c77484cd-66ed-4471-9136-5e44eadd28ad-scripts\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.196086 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-run\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.196101 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-run-ovn\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.196118 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-log\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.196556 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-run\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.196653 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-run-ovn\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198301 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c77484cd-66ed-4471-9136-5e44eadd28ad-scripts\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198346 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77484cd-66ed-4471-9136-5e44eadd28ad-combined-ca-bundle\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198370 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-etc-ovs\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198391 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zbtd\" (UniqueName: \"kubernetes.io/projected/c77484cd-66ed-4471-9136-5e44eadd28ad-kube-api-access-2zbtd\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198409 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c77484cd-66ed-4471-9136-5e44eadd28ad-ovn-controller-tls-certs\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198439 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpv9d\" (UniqueName: \"kubernetes.io/projected/315d6386-62b1-4775-8185-2814e6b91bf5-kube-api-access-dpv9d\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198469 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-run\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198546 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-lib\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198565 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-log-ovn\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.198728 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c77484cd-66ed-4471-9136-5e44eadd28ad-var-log-ovn\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.204270 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c77484cd-66ed-4471-9136-5e44eadd28ad-ovn-controller-tls-certs\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.219784 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77484cd-66ed-4471-9136-5e44eadd28ad-combined-ca-bundle\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.225617 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zbtd\" (UniqueName: \"kubernetes.io/projected/c77484cd-66ed-4471-9136-5e44eadd28ad-kube-api-access-2zbtd\") pod \"ovn-controller-zh2n4\" (UID: \"c77484cd-66ed-4471-9136-5e44eadd28ad\") " pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.299468 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-log\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.299524 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-etc-ovs\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.299561 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpv9d\" (UniqueName: \"kubernetes.io/projected/315d6386-62b1-4775-8185-2814e6b91bf5-kube-api-access-dpv9d\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.299608 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-run\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.299634 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-lib\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.299659 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/315d6386-62b1-4775-8185-2814e6b91bf5-scripts\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.300137 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-log\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.300245 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-run\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.300538 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-var-lib\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.300642 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/315d6386-62b1-4775-8185-2814e6b91bf5-etc-ovs\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.303984 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/315d6386-62b1-4775-8185-2814e6b91bf5-scripts\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.329312 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zh2n4"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.334050 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpv9d\" (UniqueName: \"kubernetes.io/projected/315d6386-62b1-4775-8185-2814e6b91bf5-kube-api-access-dpv9d\") pod \"ovn-controller-ovs-4tbr6\" (UID: \"315d6386-62b1-4775-8185-2814e6b91bf5\") " pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.379709 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4tbr6"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.797011 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.798770 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.802358 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.802679 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.802852 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8hd6j"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.802996 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.803142 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.837593 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.920173 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.920212 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4kv\" (UniqueName: \"kubernetes.io/projected/9a18067c-f6d5-4650-897e-ec8e249b0e8b-kube-api-access-7q4kv\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.920275 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a18067c-f6d5-4650-897e-ec8e249b0e8b-config\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.920429 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.920488 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a18067c-f6d5-4650-897e-ec8e249b0e8b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.920520 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.920601 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:07 crc kubenswrapper[4789]: I1124 11:45:07.920631 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9a18067c-f6d5-4650-897e-ec8e249b0e8b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.022395 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q4kv\" (UniqueName: \"kubernetes.io/projected/9a18067c-f6d5-4650-897e-ec8e249b0e8b-kube-api-access-7q4kv\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.022524 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a18067c-f6d5-4650-897e-ec8e249b0e8b-config\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.022561 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.022579 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a18067c-f6d5-4650-897e-ec8e249b0e8b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.022604 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.022645 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.022670 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9a18067c-f6d5-4650-897e-ec8e249b0e8b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.022697 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.025103 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.025322 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9a18067c-f6d5-4650-897e-ec8e249b0e8b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.025685 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a18067c-f6d5-4650-897e-ec8e249b0e8b-config\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.027302 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a18067c-f6d5-4650-897e-ec8e249b0e8b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.028905 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.029511 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.039962 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a18067c-f6d5-4650-897e-ec8e249b0e8b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.047559 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q4kv\" (UniqueName: \"kubernetes.io/projected/9a18067c-f6d5-4650-897e-ec8e249b0e8b-kube-api-access-7q4kv\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.049169 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9a18067c-f6d5-4650-897e-ec8e249b0e8b\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:08 crc kubenswrapper[4789]: I1124 11:45:08.124879 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.664336 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.667912 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.669987 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.670212 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.670419 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.670904 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-f9rwh"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.678687 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.755686 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77772f5a-c498-46a2-861c-8145c554f262-config\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.755745 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.755775 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.755815 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bktjk\" (UniqueName: \"kubernetes.io/projected/77772f5a-c498-46a2-861c-8145c554f262-kube-api-access-bktjk\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.755840 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77772f5a-c498-46a2-861c-8145c554f262-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.755907 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.756022 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.756038 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77772f5a-c498-46a2-861c-8145c554f262-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.863796 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.863965 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bktjk\" (UniqueName: \"kubernetes.io/projected/77772f5a-c498-46a2-861c-8145c554f262-kube-api-access-bktjk\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.864028 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77772f5a-c498-46a2-861c-8145c554f262-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.864156 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.864233 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.864276 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77772f5a-c498-46a2-861c-8145c554f262-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.864365 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77772f5a-c498-46a2-861c-8145c554f262-config\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.864415 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.870591 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.873389 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.874784 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.884082 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77772f5a-c498-46a2-861c-8145c554f262-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.884915 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77772f5a-c498-46a2-861c-8145c554f262-config\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.885954 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77772f5a-c498-46a2-861c-8145c554f262-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.892619 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77772f5a-c498-46a2-861c-8145c554f262-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.911768 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.913728 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bktjk\" (UniqueName: \"kubernetes.io/projected/77772f5a-c498-46a2-861c-8145c554f262-kube-api-access-bktjk\") pod \"ovsdbserver-sb-0\" (UID: \"77772f5a-c498-46a2-861c-8145c554f262\") " pod="openstack/ovsdbserver-sb-0"
Nov 24 11:45:09 crc kubenswrapper[4789]: I1124 11:45:09.993237 4789 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 11:45:10 crc kubenswrapper[4789]: I1124 11:45:10.201137 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:45:10 crc kubenswrapper[4789]: W1124 11:45:10.208102 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddec57c49_8f33_4945_902f_bc30c4f577a7.slice/crio-c3ac9335b3a04e4e2bb8bc5a9bc66e7fa797c8e456e4654717b19e39e55c9f92 WatchSource:0}: Error finding container c3ac9335b3a04e4e2bb8bc5a9bc66e7fa797c8e456e4654717b19e39e55c9f92: Status 404 returned error can't find the container with id c3ac9335b3a04e4e2bb8bc5a9bc66e7fa797c8e456e4654717b19e39e55c9f92 Nov 24 11:45:10 crc kubenswrapper[4789]: I1124 11:45:10.613068 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:45:10 crc kubenswrapper[4789]: I1124 11:45:10.615632 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e","Type":"ContainerStarted","Data":"09ac90e8d2dc8174a64b28a962173151214ecc828c9103ef208179ca108e1bc3"} Nov 24 11:45:10 crc kubenswrapper[4789]: I1124 11:45:10.617897 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb" event={"ID":"dec57c49-8f33-4945-902f-bc30c4f577a7","Type":"ContainerStarted","Data":"c3ac9335b3a04e4e2bb8bc5a9bc66e7fa797c8e456e4654717b19e39e55c9f92"} Nov 24 11:45:10 crc kubenswrapper[4789]: I1124 11:45:10.660862 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2xtcq"] Nov 24 11:45:11 crc kubenswrapper[4789]: W1124 11:45:11.068700 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda67a3b5d_1c99_4caa_8d70_f65c7b1926a1.slice/crio-2f4186439f7ab9d2e93f427226a41e18fd30ae679f7417d76aaa0061bb5cf4a8 WatchSource:0}: Error finding container 2f4186439f7ab9d2e93f427226a41e18fd30ae679f7417d76aaa0061bb5cf4a8: Status 404 returned error can't find the container with id 2f4186439f7ab9d2e93f427226a41e18fd30ae679f7417d76aaa0061bb5cf4a8 Nov 24 11:45:11 crc kubenswrapper[4789]: W1124 11:45:11.084338 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad2c0f97_8696_425d_bd5a_42a24bee8297.slice/crio-cd9e980668f226cae8a221617ea2d9f60230ac680ef31ad8bb430d7191f0a444 WatchSource:0}: Error finding container cd9e980668f226cae8a221617ea2d9f60230ac680ef31ad8bb430d7191f0a444: Status 404 returned error can't find the container with id cd9e980668f226cae8a221617ea2d9f60230ac680ef31ad8bb430d7191f0a444 Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.122316 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.122529 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts 
--domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxzvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-87j46_openstack(1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.123960 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" podUID="1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2" Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.265310 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.265874 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz5kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-6gndw_openstack(fd2927f8-d0f3-444c-8d8a-51d76f298b85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.267610 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" podUID="fd2927f8-d0f3-444c-8d8a-51d76f298b85" Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.588688 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.605863 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 11:45:11 crc kubenswrapper[4789]: W1124 11:45:11.607013 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6583a8fe_db60_4eac_8bd0_32278517eff8.slice/crio-536d24d16b3d8f97a0c6bd69cc982fe24adeb91e16044da0547f5c282e365a2f WatchSource:0}: Error finding container 536d24d16b3d8f97a0c6bd69cc982fe24adeb91e16044da0547f5c282e365a2f: Status 404 returned error can't find the container with id 536d24d16b3d8f97a0c6bd69cc982fe24adeb91e16044da0547f5c282e365a2f Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.642961 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb" event={"ID":"dec57c49-8f33-4945-902f-bc30c4f577a7","Type":"ContainerStarted","Data":"7c55f61789c0450b088fa46be2733db3eb46146647fd2f6b1653a1cc5424e8de"} Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.644833 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c","Type":"ContainerStarted","Data":"9e87ef54e9cddf7a5adeb25b6b22a93a49f4d28b57ad5e30b51444b9aca6adce"} Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.645878 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6583a8fe-db60-4eac-8bd0-32278517eff8","Type":"ContainerStarted","Data":"536d24d16b3d8f97a0c6bd69cc982fe24adeb91e16044da0547f5c282e365a2f"} Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.651825 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" event={"ID":"12a839b1-6b99-4bc4-a4b1-40db5cd77076","Type":"ContainerStarted","Data":"5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b"} Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.653532 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" event={"ID":"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1","Type":"ContainerStarted","Data":"2f4186439f7ab9d2e93f427226a41e18fd30ae679f7417d76aaa0061bb5cf4a8"} Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.662987 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb" podStartSLOduration=11.662973687000001 podStartE2EDuration="11.662973687s" podCreationTimestamp="2025-11-24 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:45:11.659695285 +0000 UTC m=+894.242166664" watchObservedRunningTime="2025-11-24 11:45:11.662973687 +0000 UTC m=+894.245445066" Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.666023 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad2c0f97-8696-425d-bd5a-42a24bee8297","Type":"ContainerStarted","Data":"cd9e980668f226cae8a221617ea2d9f60230ac680ef31ad8bb430d7191f0a444"} Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.713620 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.808857 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zh2n4"] Nov 24 11:45:11 crc kubenswrapper[4789]: I1124 11:45:11.919392 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.941889 4789 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 24 11:45:11 crc kubenswrapper[4789]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/12a839b1-6b99-4bc4-a4b1-40db5cd77076/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 11:45:11 crc kubenswrapper[4789]: > podSandboxID="b09d3da4320fe085f2351f8c1414b0082049976fdf823332556fa5c26ec49e94" Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.942360 4789 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 24 11:45:11 crc kubenswrapper[4789]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j58x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-d85r8_openstack(12a839b1-6b99-4bc4-a4b1-40db5cd77076): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/12a839b1-6b99-4bc4-a4b1-40db5cd77076/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 11:45:11 crc kubenswrapper[4789]: > logger="UnhandledError" Nov 24 11:45:11 crc kubenswrapper[4789]: E1124 11:45:11.943640 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/12a839b1-6b99-4bc4-a4b1-40db5cd77076/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" podUID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" Nov 24 11:45:11 crc kubenswrapper[4789]: W1124 11:45:11.983360 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2c4a6c2_feeb_4afe_bfd8_9c79e65736e1.slice/crio-94869572205dae659d2f98d4e2b86acae8ea33c319393b41f194be7051a21d21 WatchSource:0}: Error finding container 94869572205dae659d2f98d4e2b86acae8ea33c319393b41f194be7051a21d21: Status 404 returned error can't find the container with id 
94869572205dae659d2f98d4e2b86acae8ea33c319393b41f194be7051a21d21 Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.233865 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.244423 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.247922 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.313542 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4tbr6"] Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.322067 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd2927f8-d0f3-444c-8d8a-51d76f298b85-config\") pod \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\" (UID: \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\") " Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.322139 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz5kh\" (UniqueName: \"kubernetes.io/projected/fd2927f8-d0f3-444c-8d8a-51d76f298b85-kube-api-access-bz5kh\") pod \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\" (UID: \"fd2927f8-d0f3-444c-8d8a-51d76f298b85\") " Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.322188 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-dns-svc\") pod \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.322224 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxzvp\" (UniqueName: \"kubernetes.io/projected/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-kube-api-access-mxzvp\") pod \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.322367 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-config\") pod \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\" (UID: \"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2\") " Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.322884 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-config" (OuterVolumeSpecName: "config") pod "1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2" (UID: "1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.322922 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2" (UID: "1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.323564 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd2927f8-d0f3-444c-8d8a-51d76f298b85-config" (OuterVolumeSpecName: "config") pod "fd2927f8-d0f3-444c-8d8a-51d76f298b85" (UID: "fd2927f8-d0f3-444c-8d8a-51d76f298b85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.327738 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-kube-api-access-mxzvp" (OuterVolumeSpecName: "kube-api-access-mxzvp") pod "1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2" (UID: "1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2"). InnerVolumeSpecName "kube-api-access-mxzvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.330235 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd2927f8-d0f3-444c-8d8a-51d76f298b85-kube-api-access-bz5kh" (OuterVolumeSpecName: "kube-api-access-bz5kh") pod "fd2927f8-d0f3-444c-8d8a-51d76f298b85" (UID: "fd2927f8-d0f3-444c-8d8a-51d76f298b85"). InnerVolumeSpecName "kube-api-access-bz5kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:12 crc kubenswrapper[4789]: W1124 11:45:12.337684 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod315d6386_62b1_4775_8185_2814e6b91bf5.slice/crio-12aa46e847fc8fa0a88e776a71957604f8f07a21919d95a865af6aec72368765 WatchSource:0}: Error finding container 12aa46e847fc8fa0a88e776a71957604f8f07a21919d95a865af6aec72368765: Status 404 returned error can't find the container with id 12aa46e847fc8fa0a88e776a71957604f8f07a21919d95a865af6aec72368765 Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.425231 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.425279 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxzvp\" (UniqueName: \"kubernetes.io/projected/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-kube-api-access-mxzvp\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.425295 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.425307 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd2927f8-d0f3-444c-8d8a-51d76f298b85-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.425319 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz5kh\" (UniqueName: \"kubernetes.io/projected/fd2927f8-d0f3-444c-8d8a-51d76f298b85-kube-api-access-bz5kh\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.675987 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zh2n4" 
event={"ID":"c77484cd-66ed-4471-9136-5e44eadd28ad","Type":"ContainerStarted","Data":"d68943872815a5ab3f978947ddf8a4b35101dae1389e5064d7d56cb6696a6325"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.678040 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4tbr6" event={"ID":"315d6386-62b1-4775-8185-2814e6b91bf5","Type":"ContainerStarted","Data":"12aa46e847fc8fa0a88e776a71957604f8f07a21919d95a865af6aec72368765"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.679419 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" event={"ID":"fd2927f8-d0f3-444c-8d8a-51d76f298b85","Type":"ContainerDied","Data":"659bb7355d22c65922af7b900d841434a553404b82813e3f226fbdaa32790d4d"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.679501 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6gndw" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.684583 4789 generic.go:334] "Generic (PLEG): container finished" podID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" containerID="5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b" exitCode=0 Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.684661 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" event={"ID":"12a839b1-6b99-4bc4-a4b1-40db5cd77076","Type":"ContainerDied","Data":"5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.685631 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6236001-96b0-4425-9f1f-eb84778d290a","Type":"ContainerStarted","Data":"5672c120970e11e22b32fa729620443e1172c6e5d4b15c0db2704c33bfa4285e"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.686877 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"77772f5a-c498-46a2-861c-8145c554f262","Type":"ContainerStarted","Data":"9f650e9b33aa572068414186b09025d38bc7779d81adf9e44cc5bf1d3f3196cf"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.688409 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.688423 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-87j46" event={"ID":"1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2","Type":"ContainerDied","Data":"e4161c2a075b6fba91ea14ba090ec88e921142dd7250c30b08ae7195599e6543"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.690311 4789 generic.go:334] "Generic (PLEG): container finished" podID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerID="3fbf7b39f472bb276b6e31741b589b27ec66a986f5720520b632e3d37ce993b2" exitCode=0 Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.690618 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" event={"ID":"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1","Type":"ContainerDied","Data":"3fbf7b39f472bb276b6e31741b589b27ec66a986f5720520b632e3d37ce993b2"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.693415 4789 generic.go:334] "Generic (PLEG): container finished" podID="dec57c49-8f33-4945-902f-bc30c4f577a7" containerID="7c55f61789c0450b088fa46be2733db3eb46146647fd2f6b1653a1cc5424e8de" exitCode=0 Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.693492 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb" event={"ID":"dec57c49-8f33-4945-902f-bc30c4f577a7","Type":"ContainerDied","Data":"7c55f61789c0450b088fa46be2733db3eb46146647fd2f6b1653a1cc5424e8de"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.695655 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1","Type":"ContainerStarted","Data":"94869572205dae659d2f98d4e2b86acae8ea33c319393b41f194be7051a21d21"} Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.782393 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gndw"] Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.782679 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gndw"] Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.809360 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-87j46"] Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.813919 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-87j46"] Nov 24 11:45:12 crc kubenswrapper[4789]: I1124 11:45:12.996395 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 11:45:13 crc kubenswrapper[4789]: W1124 11:45:13.020216 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a18067c_f6d5_4650_897e_ec8e249b0e8b.slice/crio-5aaea531e9abb125e5c4ac1f517ba5deee3870803dc5acdaa55e3c397bf47f88 WatchSource:0}: Error finding container 5aaea531e9abb125e5c4ac1f517ba5deee3870803dc5acdaa55e3c397bf47f88: Status 404 returned error can't find the container with id 5aaea531e9abb125e5c4ac1f517ba5deee3870803dc5acdaa55e3c397bf47f88 Nov 24 11:45:13 crc kubenswrapper[4789]: I1124 11:45:13.707658 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" event={"ID":"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1","Type":"ContainerStarted","Data":"95bd879df1136cc61d3d9a64a7309487c8aee470c7613713243634b3de4c0a16"} Nov 24 11:45:13 crc 
kubenswrapper[4789]: I1124 11:45:13.708894 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:45:13 crc kubenswrapper[4789]: I1124 11:45:13.711856 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9a18067c-f6d5-4650-897e-ec8e249b0e8b","Type":"ContainerStarted","Data":"5aaea531e9abb125e5c4ac1f517ba5deee3870803dc5acdaa55e3c397bf47f88"} Nov 24 11:45:14 crc kubenswrapper[4789]: I1124 11:45:14.177165 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2" path="/var/lib/kubelet/pods/1c20dfdf-b0b2-4f8f-aaa8-d4ae97224af2/volumes" Nov 24 11:45:14 crc kubenswrapper[4789]: I1124 11:45:14.177554 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd2927f8-d0f3-444c-8d8a-51d76f298b85" path="/var/lib/kubelet/pods/fd2927f8-d0f3-444c-8d8a-51d76f298b85/volumes" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.035987 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.061713 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" podStartSLOduration=18.64906192 podStartE2EDuration="19.061675366s" podCreationTimestamp="2025-11-24 11:44:56 +0000 UTC" firstStartedPulling="2025-11-24 11:45:11.071813916 +0000 UTC m=+893.654285295" lastFinishedPulling="2025-11-24 11:45:11.484427362 +0000 UTC m=+894.066898741" observedRunningTime="2025-11-24 11:45:13.728424824 +0000 UTC m=+896.310896203" watchObservedRunningTime="2025-11-24 11:45:15.061675366 +0000 UTC m=+897.644146745" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.165515 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dec57c49-8f33-4945-902f-bc30c4f577a7-config-volume\") pod \"dec57c49-8f33-4945-902f-bc30c4f577a7\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.165621 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dec57c49-8f33-4945-902f-bc30c4f577a7-secret-volume\") pod \"dec57c49-8f33-4945-902f-bc30c4f577a7\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.165644 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h4lr\" (UniqueName: \"kubernetes.io/projected/dec57c49-8f33-4945-902f-bc30c4f577a7-kube-api-access-9h4lr\") pod \"dec57c49-8f33-4945-902f-bc30c4f577a7\" (UID: \"dec57c49-8f33-4945-902f-bc30c4f577a7\") " Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.167654 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec57c49-8f33-4945-902f-bc30c4f577a7-config-volume" (OuterVolumeSpecName: "config-volume") pod "dec57c49-8f33-4945-902f-bc30c4f577a7" (UID: "dec57c49-8f33-4945-902f-bc30c4f577a7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.180888 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec57c49-8f33-4945-902f-bc30c4f577a7-kube-api-access-9h4lr" (OuterVolumeSpecName: "kube-api-access-9h4lr") pod "dec57c49-8f33-4945-902f-bc30c4f577a7" (UID: "dec57c49-8f33-4945-902f-bc30c4f577a7"). InnerVolumeSpecName "kube-api-access-9h4lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.188667 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec57c49-8f33-4945-902f-bc30c4f577a7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dec57c49-8f33-4945-902f-bc30c4f577a7" (UID: "dec57c49-8f33-4945-902f-bc30c4f577a7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.268933 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dec57c49-8f33-4945-902f-bc30c4f577a7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.268967 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h4lr\" (UniqueName: \"kubernetes.io/projected/dec57c49-8f33-4945-902f-bc30c4f577a7-kube-api-access-9h4lr\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.268977 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dec57c49-8f33-4945-902f-bc30c4f577a7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.734486 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb" event={"ID":"dec57c49-8f33-4945-902f-bc30c4f577a7","Type":"ContainerDied","Data":"c3ac9335b3a04e4e2bb8bc5a9bc66e7fa797c8e456e4654717b19e39e55c9f92"} Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.734535 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ac9335b3a04e4e2bb8bc5a9bc66e7fa797c8e456e4654717b19e39e55c9f92" Nov 24 11:45:15 crc kubenswrapper[4789]: I1124 11:45:15.734625 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-zjhkb" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.053786 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fm6r6"] Nov 24 11:45:20 crc kubenswrapper[4789]: E1124 11:45:20.054566 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec57c49-8f33-4945-902f-bc30c4f577a7" containerName="collect-profiles" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.054580 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec57c49-8f33-4945-902f-bc30c4f577a7" containerName="collect-profiles" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.054726 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec57c49-8f33-4945-902f-bc30c4f577a7" containerName="collect-profiles" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.055273 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.062647 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.091110 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fm6r6"] Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.148319 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9lq\" (UniqueName: \"kubernetes.io/projected/9d616a72-acce-41db-9107-142979aadf1f-kube-api-access-mh9lq\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.148366 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d616a72-acce-41db-9107-142979aadf1f-combined-ca-bundle\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.148392 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9d616a72-acce-41db-9107-142979aadf1f-ovn-rundir\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.148421 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9d616a72-acce-41db-9107-142979aadf1f-ovs-rundir\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.148440 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d616a72-acce-41db-9107-142979aadf1f-config\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.148488 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d616a72-acce-41db-9107-142979aadf1f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.162864 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.162910 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.222133 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2xtcq"] Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.222360 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" podUID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerName="dnsmasq-dns" containerID="cri-o://95bd879df1136cc61d3d9a64a7309487c8aee470c7613713243634b3de4c0a16" gracePeriod=10 Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.225806 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.249364 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d616a72-acce-41db-9107-142979aadf1f-combined-ca-bundle\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.249412 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9d616a72-acce-41db-9107-142979aadf1f-ovn-rundir\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.249443 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9d616a72-acce-41db-9107-142979aadf1f-ovs-rundir\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.249529 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d616a72-acce-41db-9107-142979aadf1f-config\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.249562 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d616a72-acce-41db-9107-142979aadf1f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.249644 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9lq\" (UniqueName: \"kubernetes.io/projected/9d616a72-acce-41db-9107-142979aadf1f-kube-api-access-mh9lq\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.251215 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9d616a72-acce-41db-9107-142979aadf1f-ovs-rundir\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.251294 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9d616a72-acce-41db-9107-142979aadf1f-ovn-rundir\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.251928 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d616a72-acce-41db-9107-142979aadf1f-config\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.277751 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d616a72-acce-41db-9107-142979aadf1f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.278245 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d616a72-acce-41db-9107-142979aadf1f-combined-ca-bundle\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.287274 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9lq\" (UniqueName: \"kubernetes.io/projected/9d616a72-acce-41db-9107-142979aadf1f-kube-api-access-mh9lq\") pod \"ovn-controller-metrics-fm6r6\" (UID: \"9d616a72-acce-41db-9107-142979aadf1f\") " pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.291070 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z5gw7"] Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.292411 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.298342 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z5gw7"] Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.300950 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.350876 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qpk\" (UniqueName: \"kubernetes.io/projected/62ff3797-1edc-46bd-b5b6-d3f29b244806-kube-api-access-h7qpk\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.350956 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.350996 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.351113 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-config\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.373407 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fm6r6" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.453083 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.453147 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-config\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.453206 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7qpk\" (UniqueName: \"kubernetes.io/projected/62ff3797-1edc-46bd-b5b6-d3f29b244806-kube-api-access-h7qpk\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.453261 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.453948 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.454014 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.454492 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-config\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.474658 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7qpk\" (UniqueName: \"kubernetes.io/projected/62ff3797-1edc-46bd-b5b6-d3f29b244806-kube-api-access-h7qpk\") pod \"dnsmasq-dns-5bf47b49b7-z5gw7\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") " pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.504046 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d85r8"] Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.532160 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-8rnc2"] Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.534624 4789 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.537745 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.550790 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8rnc2"] Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.656779 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmhb\" (UniqueName: \"kubernetes.io/projected/0cf50200-0128-4de2-a057-658b021fd401-kube-api-access-sfmhb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.656897 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.656929 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.656967 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-config\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.657036 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-dns-svc\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.686432 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.758296 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-dns-svc\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.758371 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfmhb\" (UniqueName: \"kubernetes.io/projected/0cf50200-0128-4de2-a057-658b021fd401-kube-api-access-sfmhb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.758432 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.758506 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.758560 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-config\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.759270 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.759535 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-config\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.759732 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.760088 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-dns-svc\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 
11:45:20.777750 4789 generic.go:334] "Generic (PLEG): container finished" podID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerID="95bd879df1136cc61d3d9a64a7309487c8aee470c7613713243634b3de4c0a16" exitCode=0 Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.777800 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" event={"ID":"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1","Type":"ContainerDied","Data":"95bd879df1136cc61d3d9a64a7309487c8aee470c7613713243634b3de4c0a16"} Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.781946 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfmhb\" (UniqueName: \"kubernetes.io/projected/0cf50200-0128-4de2-a057-658b021fd401-kube-api-access-sfmhb\") pod \"dnsmasq-dns-8554648995-8rnc2\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:20 crc kubenswrapper[4789]: I1124 11:45:20.855591 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:22 crc kubenswrapper[4789]: I1124 11:45:22.141380 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" podUID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: connect: connection refused" Nov 24 11:45:23 crc kubenswrapper[4789]: I1124 11:45:23.999172 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.119571 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7j5x\" (UniqueName: \"kubernetes.io/projected/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-kube-api-access-k7j5x\") pod \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.119746 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-dns-svc\") pod \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.119792 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-config\") pod \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\" (UID: \"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1\") " Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.133416 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-kube-api-access-k7j5x" (OuterVolumeSpecName: "kube-api-access-k7j5x") pod "a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" (UID: "a67a3b5d-1c99-4caa-8d70-f65c7b1926a1"). InnerVolumeSpecName "kube-api-access-k7j5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.203510 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-config" (OuterVolumeSpecName: "config") pod "a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" (UID: "a67a3b5d-1c99-4caa-8d70-f65c7b1926a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.204551 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" (UID: "a67a3b5d-1c99-4caa-8d70-f65c7b1926a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.221611 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.221641 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.221651 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7j5x\" (UniqueName: \"kubernetes.io/projected/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1-kube-api-access-k7j5x\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.679227 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z5gw7"] Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.691856 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8rnc2"] Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.699765 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fm6r6"] Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.805455 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" event={"ID":"a67a3b5d-1c99-4caa-8d70-f65c7b1926a1","Type":"ContainerDied","Data":"2f4186439f7ab9d2e93f427226a41e18fd30ae679f7417d76aaa0061bb5cf4a8"} Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.805511 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2xtcq" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.805525 4789 scope.go:117] "RemoveContainer" containerID="95bd879df1136cc61d3d9a64a7309487c8aee470c7613713243634b3de4c0a16" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.821934 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6236001-96b0-4425-9f1f-eb84778d290a","Type":"ContainerStarted","Data":"795d41cabed5fa8d454ca6bad286a343d1c2660d07e9989f8285d935f1272064"} Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.824225 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" event={"ID":"12a839b1-6b99-4bc4-a4b1-40db5cd77076","Type":"ContainerStarted","Data":"5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b"} Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.824318 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" podUID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" containerName="dnsmasq-dns" containerID="cri-o://5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b" gracePeriod=10 Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.824350 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.846652 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2xtcq"] Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.852649 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2xtcq"] Nov 24 11:45:24 crc kubenswrapper[4789]: I1124 11:45:24.882060 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" podStartSLOduration=19.18772102 podStartE2EDuration="28.88204188s" podCreationTimestamp="2025-11-24 11:44:56 +0000 UTC" firstStartedPulling="2025-11-24 11:45:01.645656082 +0000 UTC m=+884.228127461" lastFinishedPulling="2025-11-24 11:45:11.339976942 +0000 UTC m=+893.922448321" observedRunningTime="2025-11-24 11:45:24.881071725 +0000 UTC m=+907.463543104" watchObservedRunningTime="2025-11-24 11:45:24.88204188 +0000 UTC m=+907.464513259" Nov 24 11:45:25 crc kubenswrapper[4789]: W1124 11:45:25.134413 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62ff3797_1edc_46bd_b5b6_d3f29b244806.slice/crio-f1865ee7b035a6b3e6e42bc2d4971c8e7fa1f76b34aab47c1acacc2f87b7772a WatchSource:0}: Error finding container f1865ee7b035a6b3e6e42bc2d4971c8e7fa1f76b34aab47c1acacc2f87b7772a: Status 404 returned error can't find the container with id f1865ee7b035a6b3e6e42bc2d4971c8e7fa1f76b34aab47c1acacc2f87b7772a Nov 24 11:45:25 crc kubenswrapper[4789]: W1124 11:45:25.141048 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d616a72_acce_41db_9107_142979aadf1f.slice/crio-1a6129045f1b832bd5c0ada8b4bf5bf13754e6284ac0dfd5a7e1b045c816a702 WatchSource:0}: Error finding container 1a6129045f1b832bd5c0ada8b4bf5bf13754e6284ac0dfd5a7e1b045c816a702: Status 404 returned error can't find the container with id 1a6129045f1b832bd5c0ada8b4bf5bf13754e6284ac0dfd5a7e1b045c816a702 Nov 24 11:45:25 crc kubenswrapper[4789]: W1124 11:45:25.151036 4789 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cf50200_0128_4de2_a057_658b021fd401.slice/crio-7c0d3c1d786654430f6ea918cbd0183b31a2975de8ffa7979243f1b8c8266a63 WatchSource:0}: Error finding container 7c0d3c1d786654430f6ea918cbd0183b31a2975de8ffa7979243f1b8c8266a63: Status 404 returned error can't find the container with id 7c0d3c1d786654430f6ea918cbd0183b31a2975de8ffa7979243f1b8c8266a63 Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.160858 4789 scope.go:117] "RemoveContainer" containerID="3fbf7b39f472bb276b6e31741b589b27ec66a986f5720520b632e3d37ce993b2" Nov 24 11:45:25 crc kubenswrapper[4789]: E1124 11:45:25.164667 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 24 11:45:25 crc kubenswrapper[4789]: E1124 11:45:25.164716 4789 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 24 11:45:25 crc kubenswrapper[4789]: E1124 11:45:25.164896 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkwlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 11:45:25 crc 
kubenswrapper[4789]: E1124 11:45:25.166059 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.520202 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.650735 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-dns-svc\") pod \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.650966 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j58x5\" (UniqueName: \"kubernetes.io/projected/12a839b1-6b99-4bc4-a4b1-40db5cd77076-kube-api-access-j58x5\") pod \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.651647 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-config\") pod \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\" (UID: \"12a839b1-6b99-4bc4-a4b1-40db5cd77076\") " Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.693094 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a839b1-6b99-4bc4-a4b1-40db5cd77076-kube-api-access-j58x5" (OuterVolumeSpecName: "kube-api-access-j58x5") pod "12a839b1-6b99-4bc4-a4b1-40db5cd77076" (UID: "12a839b1-6b99-4bc4-a4b1-40db5cd77076"). InnerVolumeSpecName "kube-api-access-j58x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.754183 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j58x5\" (UniqueName: \"kubernetes.io/projected/12a839b1-6b99-4bc4-a4b1-40db5cd77076-kube-api-access-j58x5\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.833325 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6583a8fe-db60-4eac-8bd0-32278517eff8","Type":"ContainerStarted","Data":"ced9f55c4c7090ff8071572f08bb4db6a71722fd6ec4634bfc3c0469aafacdc2"} Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.833533 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.836601 4789 generic.go:334] "Generic (PLEG): container finished" podID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" containerID="5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b" exitCode=0 Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.836652 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.836679 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" event={"ID":"12a839b1-6b99-4bc4-a4b1-40db5cd77076","Type":"ContainerDied","Data":"5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b"} Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.836705 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d85r8" event={"ID":"12a839b1-6b99-4bc4-a4b1-40db5cd77076","Type":"ContainerDied","Data":"b09d3da4320fe085f2351f8c1414b0082049976fdf823332556fa5c26ec49e94"} Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.836728 4789 scope.go:117] "RemoveContainer" containerID="5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.840252 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8rnc2" event={"ID":"0cf50200-0128-4de2-a057-658b021fd401","Type":"ContainerStarted","Data":"7c0d3c1d786654430f6ea918cbd0183b31a2975de8ffa7979243f1b8c8266a63"} Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.854420 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.136295372 podStartE2EDuration="24.854394317s" podCreationTimestamp="2025-11-24 11:45:01 +0000 UTC" firstStartedPulling="2025-11-24 11:45:11.621876128 +0000 UTC m=+894.204347507" lastFinishedPulling="2025-11-24 11:45:23.339975063 +0000 UTC m=+905.922446452" observedRunningTime="2025-11-24 11:45:25.850449349 +0000 UTC m=+908.432920728" watchObservedRunningTime="2025-11-24 11:45:25.854394317 +0000 UTC m=+908.436865696" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.856277 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" event={"ID":"62ff3797-1edc-46bd-b5b6-d3f29b244806","Type":"ContainerStarted","Data":"f1865ee7b035a6b3e6e42bc2d4971c8e7fa1f76b34aab47c1acacc2f87b7772a"} Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.857728 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fm6r6" event={"ID":"9d616a72-acce-41db-9107-142979aadf1f","Type":"ContainerStarted","Data":"1a6129045f1b832bd5c0ada8b4bf5bf13754e6284ac0dfd5a7e1b045c816a702"} Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.869851 4789 scope.go:117] "RemoveContainer" containerID="5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b" Nov 24 11:45:25 crc kubenswrapper[4789]: E1124 11:45:25.869906 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.899428 4789 scope.go:117] "RemoveContainer" containerID="5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b" Nov 24 11:45:25 crc kubenswrapper[4789]: E1124 11:45:25.900436 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b\": container with ID starting with 5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b not found: ID does not exist" 
containerID="5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.900505 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b"} err="failed to get container status \"5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b\": rpc error: code = NotFound desc = could not find container \"5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b\": container with ID starting with 5ab45fdbfdacddad694b0bcbbc7442440f7cf70104bf652783bf77dc4be3634b not found: ID does not exist" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.900529 4789 scope.go:117] "RemoveContainer" containerID="5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b" Nov 24 11:45:25 crc kubenswrapper[4789]: E1124 11:45:25.901050 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b\": container with ID starting with 5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b not found: ID does not exist" containerID="5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b" Nov 24 11:45:25 crc kubenswrapper[4789]: I1124 11:45:25.901084 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b"} err="failed to get container status \"5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b\": rpc error: code = NotFound desc = could not find container \"5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b\": container with ID starting with 5400713c14d9fa7e22d8ac3288875cc925a1aeafc2d320b871581697dcdfa72b not found: ID does not exist" Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.172992 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-config" (OuterVolumeSpecName: "config") pod "12a839b1-6b99-4bc4-a4b1-40db5cd77076" (UID: "12a839b1-6b99-4bc4-a4b1-40db5cd77076"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.184770 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" path="/var/lib/kubelet/pods/a67a3b5d-1c99-4caa-8d70-f65c7b1926a1/volumes" Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.264724 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.457848 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12a839b1-6b99-4bc4-a4b1-40db5cd77076" (UID: "12a839b1-6b99-4bc4-a4b1-40db5cd77076"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.467433 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12a839b1-6b99-4bc4-a4b1-40db5cd77076-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.774373 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d85r8"] Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.788527 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d85r8"] Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.876861 4789 generic.go:334] "Generic (PLEG): container finished" podID="62ff3797-1edc-46bd-b5b6-d3f29b244806" containerID="95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b" exitCode=0 Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.876956 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" event={"ID":"62ff3797-1edc-46bd-b5b6-d3f29b244806","Type":"ContainerDied","Data":"95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.878588 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad2c0f97-8696-425d-bd5a-42a24bee8297","Type":"ContainerStarted","Data":"a664d29c1069225aca624a58f7f6bad45e8a79e6507290fb266b0b826e03e680"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.884219 4789 generic.go:334] "Generic (PLEG): container finished" podID="315d6386-62b1-4775-8185-2814e6b91bf5" containerID="5cb1053894ea7c83ef70e1c5249d67b90012e98f56f5ac672ee98ae8dad2493d" exitCode=0 Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.884387 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4tbr6" event={"ID":"315d6386-62b1-4775-8185-2814e6b91bf5","Type":"ContainerDied","Data":"5cb1053894ea7c83ef70e1c5249d67b90012e98f56f5ac672ee98ae8dad2493d"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.900777 4789 generic.go:334] "Generic (PLEG): container finished" podID="0cf50200-0128-4de2-a057-658b021fd401" containerID="e8b8c5f12ce742c6a39cd760b2d674767a25f3d1b5575f382d97e63511f94cda" exitCode=0 Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.900846 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8rnc2" event={"ID":"0cf50200-0128-4de2-a057-658b021fd401","Type":"ContainerDied","Data":"e8b8c5f12ce742c6a39cd760b2d674767a25f3d1b5575f382d97e63511f94cda"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.930907 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9a18067c-f6d5-4650-897e-ec8e249b0e8b","Type":"ContainerStarted","Data":"4518f2190ad569752b828c853451644b8f2092900348f8ffc45dd729b963bee5"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.936929 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e","Type":"ContainerStarted","Data":"9a28c3039c74fe442ed3bbd247f272af8ce6498883c5cf3377a5ba815e084551"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.954301 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c","Type":"ContainerStarted","Data":"52e4f2755353808bdf116bddea0b16e83aa99fcc2fbecfb6966495de35eb1c5d"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.978942 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"77772f5a-c498-46a2-861c-8145c554f262","Type":"ContainerStarted","Data":"c8722e880c2657fccd18f58c54ac9ca3100bb19efd6f5f7b1be00675199cb7a7"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.983620 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zh2n4" event={"ID":"c77484cd-66ed-4471-9136-5e44eadd28ad","Type":"ContainerStarted","Data":"1dc99ea56916fb67e93f56ebd7a6a22dc81de70ff6a94ee668cfe586a373c66e"} Nov 24 11:45:26 crc kubenswrapper[4789]: I1124 11:45:26.983857 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zh2n4" Nov 24 11:45:27 crc kubenswrapper[4789]: I1124 11:45:27.991394 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4tbr6" event={"ID":"315d6386-62b1-4775-8185-2814e6b91bf5","Type":"ContainerStarted","Data":"5828d91b73d4b68e7d2cac77ea5c16b678b5058ed80067a7069f339cc92a687c"} Nov 24 11:45:27 crc kubenswrapper[4789]: I1124 11:45:27.991977 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4tbr6" event={"ID":"315d6386-62b1-4775-8185-2814e6b91bf5","Type":"ContainerStarted","Data":"857903f1e3533d2f63c75d872e3d4520a0cae4706ae8940747cdbb99d59e970f"} Nov 24 11:45:27 crc kubenswrapper[4789]: I1124 11:45:27.991996 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4tbr6" Nov 24 11:45:27 crc kubenswrapper[4789]: I1124 11:45:27.997211 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8rnc2" event={"ID":"0cf50200-0128-4de2-a057-658b021fd401","Type":"ContainerStarted","Data":"fa79289c7da33f582d47d841cae8f700ae9437f94870f31f5e9be1a732de90a8"} Nov 24 11:45:27 crc kubenswrapper[4789]: I1124 11:45:27.997332 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:45:27 crc kubenswrapper[4789]: I1124 11:45:27.999371 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" event={"ID":"62ff3797-1edc-46bd-b5b6-d3f29b244806","Type":"ContainerStarted","Data":"17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505"} Nov 24 11:45:27 crc kubenswrapper[4789]: I1124 11:45:27.999538 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" Nov 24 11:45:28 crc kubenswrapper[4789]: I1124 11:45:28.011262 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4tbr6" podStartSLOduration=9.1882216 podStartE2EDuration="21.011240795s" podCreationTimestamp="2025-11-24 11:45:07 +0000 UTC" firstStartedPulling="2025-11-24 11:45:12.343627375 +0000 UTC m=+894.926098764" lastFinishedPulling="2025-11-24 11:45:24.16664658 +0000 UTC m=+906.749117959" observedRunningTime="2025-11-24 11:45:28.005897105 +0000 UTC m=+910.588368484" watchObservedRunningTime="2025-11-24 11:45:28.011240795 +0000 UTC m=+910.593712174" Nov 24 11:45:28 crc kubenswrapper[4789]: I1124 11:45:28.017899 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zh2n4" podStartSLOduration=9.694467819 
podStartE2EDuration="22.017879795s" podCreationTimestamp="2025-11-24 11:45:06 +0000 UTC" firstStartedPulling="2025-11-24 11:45:11.843220944 +0000 UTC m=+894.425692323" lastFinishedPulling="2025-11-24 11:45:24.16663292 +0000 UTC m=+906.749104299" observedRunningTime="2025-11-24 11:45:27.0627169 +0000 UTC m=+909.645188279" watchObservedRunningTime="2025-11-24 11:45:28.017879795 +0000 UTC m=+910.600351174" Nov 24 11:45:28 crc kubenswrapper[4789]: I1124 11:45:28.035622 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-8rnc2" podStartSLOduration=8.035603564 podStartE2EDuration="8.035603564s" podCreationTimestamp="2025-11-24 11:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:45:28.030937382 +0000 UTC m=+910.613408781" watchObservedRunningTime="2025-11-24 11:45:28.035603564 +0000 UTC m=+910.618074943" Nov 24 11:45:28 crc kubenswrapper[4789]: I1124 11:45:28.051394 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" podStartSLOduration=8.051375766 podStartE2EDuration="8.051375766s" podCreationTimestamp="2025-11-24 11:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:45:28.047282597 +0000 UTC m=+910.629753996" watchObservedRunningTime="2025-11-24 11:45:28.051375766 +0000 UTC m=+910.633847145" Nov 24 11:45:28 crc kubenswrapper[4789]: I1124 11:45:28.190549 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" path="/var/lib/kubelet/pods/12a839b1-6b99-4bc4-a4b1-40db5cd77076/volumes" Nov 24 11:45:29 crc kubenswrapper[4789]: I1124 11:45:29.010052 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4tbr6" Nov 24 11:45:30 crc kubenswrapper[4789]: I1124 11:45:30.017576 4789 generic.go:334] "Generic (PLEG): container finished" podID="e6236001-96b0-4425-9f1f-eb84778d290a" containerID="795d41cabed5fa8d454ca6bad286a343d1c2660d07e9989f8285d935f1272064" exitCode=0 Nov 24 11:45:30 crc kubenswrapper[4789]: I1124 11:45:30.017856 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6236001-96b0-4425-9f1f-eb84778d290a","Type":"ContainerDied","Data":"795d41cabed5fa8d454ca6bad286a343d1c2660d07e9989f8285d935f1272064"} Nov 24 11:45:30 crc kubenswrapper[4789]: I1124 11:45:30.019914 4789 generic.go:334] "Generic (PLEG): container finished" podID="9f6dd80c-3e9a-4ee6-83f8-40195165ec1c" containerID="52e4f2755353808bdf116bddea0b16e83aa99fcc2fbecfb6966495de35eb1c5d" exitCode=0 Nov 24 11:45:30 crc kubenswrapper[4789]: I1124 11:45:30.020739 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c","Type":"ContainerDied","Data":"52e4f2755353808bdf116bddea0b16e83aa99fcc2fbecfb6966495de35eb1c5d"} Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.028712 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9a18067c-f6d5-4650-897e-ec8e249b0e8b","Type":"ContainerStarted","Data":"52aaed80839504185d997f1c8d3a6e0e54129692c59720183dea18a0c7d857ac"} Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.032594 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"9f6dd80c-3e9a-4ee6-83f8-40195165ec1c","Type":"ContainerStarted","Data":"c6804f9331351f2d4fdca62ebf05b4f541847ec95b3825763712881a85f2ae05"} Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.035565 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6236001-96b0-4425-9f1f-eb84778d290a","Type":"ContainerStarted","Data":"1966084baf42a0e8e6ba4948d8b43ec08d706e9d831285fffcdceac5d212e259"} Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.037874 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"77772f5a-c498-46a2-861c-8145c554f262","Type":"ContainerStarted","Data":"8a1eeaa66a0e5b9542eff1a7d295de30bce0f5fade89652aa39bd0028fc51e30"} Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.039451 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fm6r6" event={"ID":"9d616a72-acce-41db-9107-142979aadf1f","Type":"ContainerStarted","Data":"299d8e6d2baedb2332da900d07321216df4f4616080a5807c77f074a7ff39fc1"} Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.055435 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.932131042 podStartE2EDuration="25.055419566s" podCreationTimestamp="2025-11-24 11:45:06 +0000 UTC" firstStartedPulling="2025-11-24 11:45:13.022393196 +0000 UTC m=+895.604864575" lastFinishedPulling="2025-11-24 11:45:30.14568172 +0000 UTC m=+912.728153099" observedRunningTime="2025-11-24 11:45:31.053162711 +0000 UTC m=+913.635634090" watchObservedRunningTime="2025-11-24 11:45:31.055419566 +0000 UTC m=+913.637890945" Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.095251 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.398969423 podStartE2EDuration="32.095229979s" podCreationTimestamp="2025-11-24 11:44:59 +0000 UTC" firstStartedPulling="2025-11-24 11:45:11.629250301 +0000 UTC m=+894.211721680" lastFinishedPulling="2025-11-24 11:45:24.325510857 +0000 UTC m=+906.907982236" observedRunningTime="2025-11-24 11:45:31.090371542 +0000 UTC m=+913.672842931" watchObservedRunningTime="2025-11-24 11:45:31.095229979 +0000 UTC m=+913.677701358" Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.116812 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.247017021 podStartE2EDuration="23.116795212s" podCreationTimestamp="2025-11-24 11:45:08 +0000 UTC" firstStartedPulling="2025-11-24 11:45:12.246384815 +0000 UTC m=+894.828856204" lastFinishedPulling="2025-11-24 11:45:30.116163016 +0000 UTC m=+912.698634395" observedRunningTime="2025-11-24 11:45:31.10888442 +0000 UTC m=+913.691355819" watchObservedRunningTime="2025-11-24 11:45:31.116795212 +0000 UTC m=+913.699266581" Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.133833 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.421034552 podStartE2EDuration="33.133815174s" podCreationTimestamp="2025-11-24 11:44:58 +0000 UTC" firstStartedPulling="2025-11-24 11:45:11.770986144 +0000 UTC m=+894.353457523" lastFinishedPulling="2025-11-24 11:45:23.483766766 +0000 UTC m=+906.066238145" observedRunningTime="2025-11-24 11:45:31.131683322 +0000 UTC m=+913.714154701" watchObservedRunningTime="2025-11-24 11:45:31.133815174 +0000 UTC m=+913.716286553" Nov 24 11:45:31 crc 
kubenswrapper[4789]: I1124 11:45:31.199205 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.199286 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.529365 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 11:45:31 crc kubenswrapper[4789]: I1124 11:45:31.545367 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fm6r6" podStartSLOduration=6.471638824 podStartE2EDuration="11.545347577s" podCreationTimestamp="2025-11-24 11:45:20 +0000 UTC" firstStartedPulling="2025-11-24 11:45:25.144020382 +0000 UTC m=+907.726491761" lastFinishedPulling="2025-11-24 11:45:30.217729135 +0000 UTC m=+912.800200514" observedRunningTime="2025-11-24 11:45:31.154426783 +0000 UTC m=+913.736898162" watchObservedRunningTime="2025-11-24 11:45:31.545347577 +0000 UTC m=+914.127818966" Nov 24 11:45:32 crc kubenswrapper[4789]: I1124 11:45:32.125048 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 11:45:32 crc kubenswrapper[4789]: I1124 11:45:32.231846 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 11:45:33 crc kubenswrapper[4789]: I1124 11:45:33.052368 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 11:45:33 crc kubenswrapper[4789]: I1124 11:45:33.205348 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 11:45:33 crc kubenswrapper[4789]: I1124 11:45:33.993755 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.041940 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.059317 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.099104 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.332708 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 11:45:34 crc kubenswrapper[4789]: E1124 11:45:34.333003 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" containerName="init" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.333015 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" containerName="init" Nov 24 11:45:34 crc kubenswrapper[4789]: E1124 11:45:34.333023 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerName="dnsmasq-dns" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.333029 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerName="dnsmasq-dns" Nov 24 11:45:34 crc kubenswrapper[4789]: E1124 11:45:34.333051 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerName="init" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.333057 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerName="init" Nov 24 11:45:34 crc kubenswrapper[4789]: E1124 11:45:34.333073 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" containerName="dnsmasq-dns" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.333079 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" containerName="dnsmasq-dns" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.333211 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a839b1-6b99-4bc4-a4b1-40db5cd77076" containerName="dnsmasq-dns" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.333225 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67a3b5d-1c99-4caa-8d70-f65c7b1926a1" containerName="dnsmasq-dns" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.334077 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.339930 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.340018 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8qn8j" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.339946 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.342763 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.361285 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.462208 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.462264 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.462293 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-config\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.462407 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kj7p\" (UniqueName: \"kubernetes.io/projected/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-kube-api-access-7kj7p\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 
11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.462475 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-scripts\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.462515 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.462661 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.563730 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.563796 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.563839 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-config\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.563861 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kj7p\" (UniqueName: \"kubernetes.io/projected/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-kube-api-access-7kj7p\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.563886 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-scripts\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.563909 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.563983 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.565144 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.565348 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-config\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.565809 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-scripts\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.569784 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.570765 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.581280 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.588422 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kj7p\" (UniqueName: \"kubernetes.io/projected/8dad9e06-c4ff-46fd-9864-a6cd81ad08db-kube-api-access-7kj7p\") pod \"ovn-northd-0\" (UID: \"8dad9e06-c4ff-46fd-9864-a6cd81ad08db\") " pod="openstack/ovn-northd-0" Nov 24 11:45:34 crc kubenswrapper[4789]: I1124 11:45:34.661940 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0"
Nov 24 11:45:34 crc kubenswrapper[4789]: E1124 11:45:34.940838 4789 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.184:42742->38.102.83.184:35431: write tcp 38.102.83.184:42742->38.102.83.184:35431: write: broken pipe
Nov 24 11:45:35 crc kubenswrapper[4789]: I1124 11:45:35.148449 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 24 11:45:35 crc kubenswrapper[4789]: W1124 11:45:35.156200 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dad9e06_c4ff_46fd_9864_a6cd81ad08db.slice/crio-9df7fe6aa9d148f0d6034686ac72d704780f5ece0763a4156abd0b7f4145701c WatchSource:0}: Error finding container 9df7fe6aa9d148f0d6034686ac72d704780f5ece0763a4156abd0b7f4145701c: Status 404 returned error can't find the container with id 9df7fe6aa9d148f0d6034686ac72d704780f5ece0763a4156abd0b7f4145701c
Nov 24 11:45:35 crc kubenswrapper[4789]: I1124 11:45:35.688587 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7"
Nov 24 11:45:35 crc kubenswrapper[4789]: I1124 11:45:35.858891 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-8rnc2"
Nov 24 11:45:35 crc kubenswrapper[4789]: I1124 11:45:35.914428 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z5gw7"]
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.074722 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8dad9e06-c4ff-46fd-9864-a6cd81ad08db","Type":"ContainerStarted","Data":"9df7fe6aa9d148f0d6034686ac72d704780f5ece0763a4156abd0b7f4145701c"}
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.074860 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" podUID="62ff3797-1edc-46bd-b5b6-d3f29b244806" containerName="dnsmasq-dns" containerID="cri-o://17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505" gracePeriod=10
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.578862 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7"
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.707267 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-config\") pod \"62ff3797-1edc-46bd-b5b6-d3f29b244806\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") "
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.708568 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7qpk\" (UniqueName: \"kubernetes.io/projected/62ff3797-1edc-46bd-b5b6-d3f29b244806-kube-api-access-h7qpk\") pod \"62ff3797-1edc-46bd-b5b6-d3f29b244806\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") "
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.708771 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-ovsdbserver-nb\") pod \"62ff3797-1edc-46bd-b5b6-d3f29b244806\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") "
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.708859 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-dns-svc\") pod \"62ff3797-1edc-46bd-b5b6-d3f29b244806\" (UID: \"62ff3797-1edc-46bd-b5b6-d3f29b244806\") "
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.712105 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ff3797-1edc-46bd-b5b6-d3f29b244806-kube-api-access-h7qpk" (OuterVolumeSpecName: "kube-api-access-h7qpk") pod "62ff3797-1edc-46bd-b5b6-d3f29b244806" (UID: "62ff3797-1edc-46bd-b5b6-d3f29b244806"). InnerVolumeSpecName "kube-api-access-h7qpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.750671 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62ff3797-1edc-46bd-b5b6-d3f29b244806" (UID: "62ff3797-1edc-46bd-b5b6-d3f29b244806"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.762595 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-config" (OuterVolumeSpecName: "config") pod "62ff3797-1edc-46bd-b5b6-d3f29b244806" (UID: "62ff3797-1edc-46bd-b5b6-d3f29b244806"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.764490 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "62ff3797-1edc-46bd-b5b6-d3f29b244806" (UID: "62ff3797-1edc-46bd-b5b6-d3f29b244806"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.810618 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.810674 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.810688 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ff3797-1edc-46bd-b5b6-d3f29b244806-config\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:36 crc kubenswrapper[4789]: I1124 11:45:36.810700 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7qpk\" (UniqueName: \"kubernetes.io/projected/62ff3797-1edc-46bd-b5b6-d3f29b244806-kube-api-access-h7qpk\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.083831 4789 generic.go:334] "Generic (PLEG): container finished" podID="62ff3797-1edc-46bd-b5b6-d3f29b244806" containerID="17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505" exitCode=0
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.083888 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.083908 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" event={"ID":"62ff3797-1edc-46bd-b5b6-d3f29b244806","Type":"ContainerDied","Data":"17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505"}
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.084390 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-z5gw7" event={"ID":"62ff3797-1edc-46bd-b5b6-d3f29b244806","Type":"ContainerDied","Data":"f1865ee7b035a6b3e6e42bc2d4971c8e7fa1f76b34aab47c1acacc2f87b7772a"}
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.084425 4789 scope.go:117] "RemoveContainer" containerID="17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.086348 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8dad9e06-c4ff-46fd-9864-a6cd81ad08db","Type":"ContainerStarted","Data":"b90d1319aae86e31f6ba03c6ea72c54c6045d2bc5059e84279349d7637e1f623"}
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.086368 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8dad9e06-c4ff-46fd-9864-a6cd81ad08db","Type":"ContainerStarted","Data":"2e23d48a60d92324f46d5ad0326a40633a24820d542f52fe711c3f91bfb56426"}
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.086612 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.104615 4789 scope.go:117] "RemoveContainer" containerID="95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.117725 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.945668092 podStartE2EDuration="3.117703078s" podCreationTimestamp="2025-11-24 11:45:34 +0000 UTC" firstStartedPulling="2025-11-24 11:45:35.158000102 +0000 UTC m=+917.740471481" lastFinishedPulling="2025-11-24 11:45:36.330035088 +0000 UTC m=+918.912506467" observedRunningTime="2025-11-24 11:45:37.117031072 +0000 UTC m=+919.699502451" watchObservedRunningTime="2025-11-24 11:45:37.117703078 +0000 UTC m=+919.700174497"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.124115 4789 scope.go:117] "RemoveContainer" containerID="17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505"
Nov 24 11:45:37 crc kubenswrapper[4789]: E1124 11:45:37.125766 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505\": container with ID starting with 17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505 not found: ID does not exist" containerID="17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.125883 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505"} err="failed to get container status \"17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505\": rpc error: code = NotFound desc = could not find container \"17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505\": container with ID starting with 17a73aee1f13e5f8dc07cf567ad8ba68b7d55833349c64e263067876604fe505 not found: ID does not exist"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.125971 4789 scope.go:117] "RemoveContainer" containerID="95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b"
Nov 24 11:45:37 crc kubenswrapper[4789]: E1124 11:45:37.126668 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b\": container with ID starting with 95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b not found: ID does not exist" containerID="95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.126755 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b"} err="failed to get container status \"95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b\": rpc error: code = NotFound desc = could not find container \"95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b\": container with ID starting with 95cbe8b4a4d9e2b8b1737c4fc85e3654826342b550d5a4bdca9c0dc2f29ea76b not found: ID does not exist"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.160559 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z5gw7"]
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.167277 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-z5gw7"]
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.360641 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:37 crc kubenswrapper[4789]: I1124 11:45:37.440524 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 24 11:45:38 crc kubenswrapper[4789]: I1124 11:45:38.186626 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ff3797-1edc-46bd-b5b6-d3f29b244806" path="/var/lib/kubelet/pods/62ff3797-1edc-46bd-b5b6-d3f29b244806/volumes"
Nov 24 11:45:39 crc kubenswrapper[4789]: I1124 11:45:39.691401 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 24 11:45:39 crc kubenswrapper[4789]: I1124 11:45:39.692745 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 24 11:45:39 crc kubenswrapper[4789]: I1124 11:45:39.772514 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 24 11:45:40 crc kubenswrapper[4789]: I1124 11:45:40.216330 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.124559 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1","Type":"ContainerStarted","Data":"9a1d2b3e3f422c34a2e01942cf5675ee421c148d68e52b751d3037eccc50f6c5"}
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.126613 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.232020 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.6673896 podStartE2EDuration="38.232001708s" podCreationTimestamp="2025-11-24 11:45:03 +0000 UTC" firstStartedPulling="2025-11-24 11:45:11.986653359 +0000 UTC m=+894.569124738" lastFinishedPulling="2025-11-24 11:45:40.551265427 +0000 UTC m=+923.133736846" observedRunningTime="2025-11-24 11:45:41.168203624 +0000 UTC m=+923.750675083" watchObservedRunningTime="2025-11-24 11:45:41.232001708 +0000 UTC m=+923.814473097"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.237959 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zqv9q"]
Nov 24 11:45:41 crc kubenswrapper[4789]: E1124 11:45:41.238555 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ff3797-1edc-46bd-b5b6-d3f29b244806" containerName="init"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.238670 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ff3797-1edc-46bd-b5b6-d3f29b244806" containerName="init"
Nov 24 11:45:41 crc kubenswrapper[4789]: E1124 11:45:41.238775 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ff3797-1edc-46bd-b5b6-d3f29b244806" containerName="dnsmasq-dns"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.238843 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ff3797-1edc-46bd-b5b6-d3f29b244806" containerName="dnsmasq-dns"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.239115 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ff3797-1edc-46bd-b5b6-d3f29b244806" containerName="dnsmasq-dns"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.239845 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.269068 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e52f-account-create-5n95s"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.270349 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.273813 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.283244 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zqv9q"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.293074 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh6rc\" (UniqueName: \"kubernetes.io/projected/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-kube-api-access-lh6rc\") pod \"keystone-db-create-zqv9q\" (UID: \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\") " pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.293372 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-operator-scripts\") pod \"keystone-db-create-zqv9q\" (UID: \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\") " pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.293514 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzhg\" (UniqueName: \"kubernetes.io/projected/a18094e0-852b-4365-b8c8-a65185dc446e-kube-api-access-2dzhg\") pod \"keystone-e52f-account-create-5n95s\" (UID: \"a18094e0-852b-4365-b8c8-a65185dc446e\") " pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.293740 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18094e0-852b-4365-b8c8-a65185dc446e-operator-scripts\") pod \"keystone-e52f-account-create-5n95s\" (UID: \"a18094e0-852b-4365-b8c8-a65185dc446e\") " pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.299397 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e52f-account-create-5n95s"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.397278 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18094e0-852b-4365-b8c8-a65185dc446e-operator-scripts\") pod \"keystone-e52f-account-create-5n95s\" (UID: \"a18094e0-852b-4365-b8c8-a65185dc446e\") " pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.397597 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh6rc\" (UniqueName: \"kubernetes.io/projected/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-kube-api-access-lh6rc\") pod \"keystone-db-create-zqv9q\" (UID: \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\") " pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.397626 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-operator-scripts\") pod \"keystone-db-create-zqv9q\" (UID: \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\") " pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.397646 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzhg\" (UniqueName: \"kubernetes.io/projected/a18094e0-852b-4365-b8c8-a65185dc446e-kube-api-access-2dzhg\") pod \"keystone-e52f-account-create-5n95s\" (UID: \"a18094e0-852b-4365-b8c8-a65185dc446e\") " pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.398135 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18094e0-852b-4365-b8c8-a65185dc446e-operator-scripts\") pod \"keystone-e52f-account-create-5n95s\" (UID: \"a18094e0-852b-4365-b8c8-a65185dc446e\") " pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.398825 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-operator-scripts\") pod \"keystone-db-create-zqv9q\" (UID: \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\") " pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.421215 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzhg\" (UniqueName: \"kubernetes.io/projected/a18094e0-852b-4365-b8c8-a65185dc446e-kube-api-access-2dzhg\") pod \"keystone-e52f-account-create-5n95s\" (UID: \"a18094e0-852b-4365-b8c8-a65185dc446e\") " pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.442903 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh6rc\" (UniqueName: \"kubernetes.io/projected/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-kube-api-access-lh6rc\") pod \"keystone-db-create-zqv9q\" (UID: \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\") " pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.447046 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-jsdzm"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.448168 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.461426 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-jsdzm"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.501278 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzjx\" (UniqueName: \"kubernetes.io/projected/b6a46f49-9d70-4876-a8ba-070a44606a93-kube-api-access-dvzjx\") pod \"placement-db-create-jsdzm\" (UID: \"b6a46f49-9d70-4876-a8ba-070a44606a93\") " pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.501338 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a46f49-9d70-4876-a8ba-070a44606a93-operator-scripts\") pod \"placement-db-create-jsdzm\" (UID: \"b6a46f49-9d70-4876-a8ba-070a44606a93\") " pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.549642 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cd25-account-create-56jpk"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.550701 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.553915 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.557828 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.568125 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cd25-account-create-56jpk"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.611651 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.613151 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5592n\" (UniqueName: \"kubernetes.io/projected/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-kube-api-access-5592n\") pod \"placement-cd25-account-create-56jpk\" (UID: \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\") " pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.613538 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-operator-scripts\") pod \"placement-cd25-account-create-56jpk\" (UID: \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\") " pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.613616 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvzjx\" (UniqueName: \"kubernetes.io/projected/b6a46f49-9d70-4876-a8ba-070a44606a93-kube-api-access-dvzjx\") pod \"placement-db-create-jsdzm\" (UID: \"b6a46f49-9d70-4876-a8ba-070a44606a93\") " pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.617059 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a46f49-9d70-4876-a8ba-070a44606a93-operator-scripts\") pod \"placement-db-create-jsdzm\" (UID: \"b6a46f49-9d70-4876-a8ba-070a44606a93\") " pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.618013 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a46f49-9d70-4876-a8ba-070a44606a93-operator-scripts\") pod \"placement-db-create-jsdzm\" (UID: \"b6a46f49-9d70-4876-a8ba-070a44606a93\") " pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.641603 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvzjx\" (UniqueName: \"kubernetes.io/projected/b6a46f49-9d70-4876-a8ba-070a44606a93-kube-api-access-dvzjx\") pod \"placement-db-create-jsdzm\" (UID: \"b6a46f49-9d70-4876-a8ba-070a44606a93\") " pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.669060 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-wdn9d"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.670329 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.688580 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wdn9d"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.718100 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91441784-0780-4721-bed1-4197f7f24cdb-operator-scripts\") pod \"glance-db-create-wdn9d\" (UID: \"91441784-0780-4721-bed1-4197f7f24cdb\") " pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.718155 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxnh\" (UniqueName: \"kubernetes.io/projected/91441784-0780-4721-bed1-4197f7f24cdb-kube-api-access-lxxnh\") pod \"glance-db-create-wdn9d\" (UID: \"91441784-0780-4721-bed1-4197f7f24cdb\") " pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.718222 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5592n\" (UniqueName: \"kubernetes.io/projected/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-kube-api-access-5592n\") pod \"placement-cd25-account-create-56jpk\" (UID: \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\") " pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.718242 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-operator-scripts\") pod \"placement-cd25-account-create-56jpk\" (UID: \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\") " pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.718880 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-operator-scripts\") pod \"placement-cd25-account-create-56jpk\" (UID: \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\") " pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.743257 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5592n\" (UniqueName: \"kubernetes.io/projected/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-kube-api-access-5592n\") pod \"placement-cd25-account-create-56jpk\" (UID: \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\") " pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.819304 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91441784-0780-4721-bed1-4197f7f24cdb-operator-scripts\") pod \"glance-db-create-wdn9d\" (UID: \"91441784-0780-4721-bed1-4197f7f24cdb\") " pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.819776 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxnh\" (UniqueName: \"kubernetes.io/projected/91441784-0780-4721-bed1-4197f7f24cdb-kube-api-access-lxxnh\") pod \"glance-db-create-wdn9d\" (UID: \"91441784-0780-4721-bed1-4197f7f24cdb\") " pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.820316 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.820682 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91441784-0780-4721-bed1-4197f7f24cdb-operator-scripts\") pod \"glance-db-create-wdn9d\" (UID: \"91441784-0780-4721-bed1-4197f7f24cdb\") " pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.856671 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxnh\" (UniqueName: \"kubernetes.io/projected/91441784-0780-4721-bed1-4197f7f24cdb-kube-api-access-lxxnh\") pod \"glance-db-create-wdn9d\" (UID: \"91441784-0780-4721-bed1-4197f7f24cdb\") " pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.873995 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ccb9-account-create-n9jzt"]
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.875025 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.894279 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.895718 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.922687 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-operator-scripts\") pod \"glance-ccb9-account-create-n9jzt\" (UID: \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\") " pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.922761 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82zf5\" (UniqueName: \"kubernetes.io/projected/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-kube-api-access-82zf5\") pod \"glance-ccb9-account-create-n9jzt\" (UID: \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\") " pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:41 crc kubenswrapper[4789]: I1124 11:45:41.929159 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ccb9-account-create-n9jzt"]
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.019976 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.024882 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-operator-scripts\") pod \"glance-ccb9-account-create-n9jzt\" (UID: \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\") " pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.024981 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82zf5\" (UniqueName: \"kubernetes.io/projected/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-kube-api-access-82zf5\") pod \"glance-ccb9-account-create-n9jzt\" (UID: \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\") " pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.025931 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-operator-scripts\") pod \"glance-ccb9-account-create-n9jzt\" (UID: \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\") " pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.050918 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82zf5\" (UniqueName: \"kubernetes.io/projected/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-kube-api-access-82zf5\") pod \"glance-ccb9-account-create-n9jzt\" (UID: \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\") " pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.221779 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e52f-account-create-5n95s"]
Nov 24 11:45:42 crc kubenswrapper[4789]: W1124 11:45:42.225962 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda18094e0_852b_4365_b8c8_a65185dc446e.slice/crio-86d72e21a46cd1eca462e526a326188431326f29b4e06aef6a716f62f1c17369 WatchSource:0}: Error finding container 86d72e21a46cd1eca462e526a326188431326f29b4e06aef6a716f62f1c17369: Status 404 returned error can't find the container with id 86d72e21a46cd1eca462e526a326188431326f29b4e06aef6a716f62f1c17369
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.261205 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.340742 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zqv9q"]
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.435690 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-jsdzm"]
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.455853 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cd25-account-create-56jpk"]
Nov 24 11:45:42 crc kubenswrapper[4789]: W1124 11:45:42.463517 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50d81cc5_1abb_4c0a_9b4c_e9d69b0e0194.slice/crio-ad68302ded085d0c9175fb17988f8166c043984647d75468391fbe6cebd7b3b2 WatchSource:0}: Error finding container ad68302ded085d0c9175fb17988f8166c043984647d75468391fbe6cebd7b3b2: Status 404 returned error can't find the container with id ad68302ded085d0c9175fb17988f8166c043984647d75468391fbe6cebd7b3b2
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.557648 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wdn9d"]
Nov 24 11:45:42 crc kubenswrapper[4789]: I1124 11:45:42.712359 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ccb9-account-create-n9jzt"]
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.147570 4789 generic.go:334] "Generic (PLEG): container finished" podID="b8e0bf0e-258d-41c3-af5b-86b1413d0d9b" containerID="bef8fa7f119f7d23791353ff5bcfb5af673f41f97b0bd4ba04f5812c33b04d80" exitCode=0
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.147698 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zqv9q" event={"ID":"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b","Type":"ContainerDied","Data":"bef8fa7f119f7d23791353ff5bcfb5af673f41f97b0bd4ba04f5812c33b04d80"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.147994 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zqv9q" event={"ID":"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b","Type":"ContainerStarted","Data":"14efbf55d3469d99ace790bdb71a0c05f236a90617c6d821d94a1077f7522318"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.149499 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ccb9-account-create-n9jzt" event={"ID":"fd8a3a60-2e4e-461d-be45-3b2d8db511ba","Type":"ContainerStarted","Data":"ba526d57ffe37ec8885d602f06d5139de1799881162498c5eda463bd5c268cf3"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.149534 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ccb9-account-create-n9jzt" event={"ID":"fd8a3a60-2e4e-461d-be45-3b2d8db511ba","Type":"ContainerStarted","Data":"f78bf2aa0a0eeb6723ed25060c4ec20b25b33d12fa010c298b44a7f3a4e620c0"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.157006 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wdn9d" event={"ID":"91441784-0780-4721-bed1-4197f7f24cdb","Type":"ContainerStarted","Data":"8b75825b4b3a9a89bee133c3bff20e812c1ca7aad481a968c500d8ae4551fd0c"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.157031 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wdn9d" event={"ID":"91441784-0780-4721-bed1-4197f7f24cdb","Type":"ContainerStarted","Data":"bcf88b9bfb51e3392f921b687b02a24932a6564bbb57f093d445fa4538a92233"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.158550 4789 generic.go:334] "Generic (PLEG): container finished" podID="b6a46f49-9d70-4876-a8ba-070a44606a93" containerID="6bf3515c7a28c2f7203a6efacf2f7955f88a0ce5274571d2f3224860370688a6" exitCode=0
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.158623 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jsdzm" event={"ID":"b6a46f49-9d70-4876-a8ba-070a44606a93","Type":"ContainerDied","Data":"6bf3515c7a28c2f7203a6efacf2f7955f88a0ce5274571d2f3224860370688a6"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.158653 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jsdzm" event={"ID":"b6a46f49-9d70-4876-a8ba-070a44606a93","Type":"ContainerStarted","Data":"dcc78b4c53e94edca5c952b0ad866fe01e1f587ddbf4afe8f00f79e1511ed240"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.161640 4789 generic.go:334] "Generic (PLEG): container finished" podID="50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194" containerID="f2de067f8bccc7410e4b54afd320cb8c9e683c63b554abe973b4f0fc6423cf5b" exitCode=0
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.161728 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd25-account-create-56jpk" event={"ID":"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194","Type":"ContainerDied","Data":"f2de067f8bccc7410e4b54afd320cb8c9e683c63b554abe973b4f0fc6423cf5b"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.161756 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd25-account-create-56jpk" event={"ID":"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194","Type":"ContainerStarted","Data":"ad68302ded085d0c9175fb17988f8166c043984647d75468391fbe6cebd7b3b2"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.163254 4789 generic.go:334] "Generic (PLEG): container finished" podID="a18094e0-852b-4365-b8c8-a65185dc446e" containerID="90756f3378ab8fb4aebe76b64a9808107d10151514456dfe42cd80f1ee0e539d" exitCode=0
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.163281 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e52f-account-create-5n95s" event={"ID":"a18094e0-852b-4365-b8c8-a65185dc446e","Type":"ContainerDied","Data":"90756f3378ab8fb4aebe76b64a9808107d10151514456dfe42cd80f1ee0e539d"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.163324 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e52f-account-create-5n95s" event={"ID":"a18094e0-852b-4365-b8c8-a65185dc446e","Type":"ContainerStarted","Data":"86d72e21a46cd1eca462e526a326188431326f29b4e06aef6a716f62f1c17369"}
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.179410 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ccb9-account-create-n9jzt" podStartSLOduration=2.179396635 podStartE2EDuration="2.179396635s" podCreationTimestamp="2025-11-24 11:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:45:43.176092406 +0000 UTC m=+925.758563785" watchObservedRunningTime="2025-11-24 11:45:43.179396635 +0000 UTC m=+925.761868014"
Nov 24 11:45:43 crc kubenswrapper[4789]: I1124 11:45:43.249137 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-wdn9d" podStartSLOduration=2.249121874 podStartE2EDuration="2.249121874s" podCreationTimestamp="2025-11-24 11:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:45:43.24315882 +0000 UTC m=+925.825630199" watchObservedRunningTime="2025-11-24 11:45:43.249121874 +0000 UTC m=+925.831593253"
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.174212 4789 generic.go:334] "Generic (PLEG): container finished" podID="91441784-0780-4721-bed1-4197f7f24cdb" containerID="8b75825b4b3a9a89bee133c3bff20e812c1ca7aad481a968c500d8ae4551fd0c" exitCode=0
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.181103 4789 generic.go:334] "Generic (PLEG): container finished" podID="fd8a3a60-2e4e-461d-be45-3b2d8db511ba" containerID="ba526d57ffe37ec8885d602f06d5139de1799881162498c5eda463bd5c268cf3" exitCode=0
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.184050 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wdn9d" event={"ID":"91441784-0780-4721-bed1-4197f7f24cdb","Type":"ContainerDied","Data":"8b75825b4b3a9a89bee133c3bff20e812c1ca7aad481a968c500d8ae4551fd0c"}
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.184085 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ccb9-account-create-n9jzt" event={"ID":"fd8a3a60-2e4e-461d-be45-3b2d8db511ba","Type":"ContainerDied","Data":"ba526d57ffe37ec8885d602f06d5139de1799881162498c5eda463bd5c268cf3"}
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.624412 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.672765 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dzhg\" (UniqueName: \"kubernetes.io/projected/a18094e0-852b-4365-b8c8-a65185dc446e-kube-api-access-2dzhg\") pod \"a18094e0-852b-4365-b8c8-a65185dc446e\" (UID: \"a18094e0-852b-4365-b8c8-a65185dc446e\") "
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.672860 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18094e0-852b-4365-b8c8-a65185dc446e-operator-scripts\") pod \"a18094e0-852b-4365-b8c8-a65185dc446e\" (UID: \"a18094e0-852b-4365-b8c8-a65185dc446e\") "
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.674095 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a18094e0-852b-4365-b8c8-a65185dc446e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a18094e0-852b-4365-b8c8-a65185dc446e" (UID: "a18094e0-852b-4365-b8c8-a65185dc446e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.698483 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18094e0-852b-4365-b8c8-a65185dc446e-kube-api-access-2dzhg" (OuterVolumeSpecName: "kube-api-access-2dzhg") pod "a18094e0-852b-4365-b8c8-a65185dc446e" (UID: "a18094e0-852b-4365-b8c8-a65185dc446e"). InnerVolumeSpecName "kube-api-access-2dzhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.775713 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18094e0-852b-4365-b8c8-a65185dc446e-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.775746 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dzhg\" (UniqueName: \"kubernetes.io/projected/a18094e0-852b-4365-b8c8-a65185dc446e-kube-api-access-2dzhg\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.802394 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.806927 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.823391 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.876912 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvzjx\" (UniqueName: \"kubernetes.io/projected/b6a46f49-9d70-4876-a8ba-070a44606a93-kube-api-access-dvzjx\") pod \"b6a46f49-9d70-4876-a8ba-070a44606a93\" (UID: \"b6a46f49-9d70-4876-a8ba-070a44606a93\") "
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.877008 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-operator-scripts\") pod \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\" (UID: \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\") "
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.877026 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-operator-scripts\") pod \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\" (UID: \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\") "
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.877069 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh6rc\" (UniqueName: \"kubernetes.io/projected/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-kube-api-access-lh6rc\") pod \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\" (UID: \"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b\") "
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.877158 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5592n\" (UniqueName: \"kubernetes.io/projected/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-kube-api-access-5592n\") pod \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\" (UID: \"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194\") "
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.877208 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a46f49-9d70-4876-a8ba-070a44606a93-operator-scripts\") pod \"b6a46f49-9d70-4876-a8ba-070a44606a93\" (UID: \"b6a46f49-9d70-4876-a8ba-070a44606a93\") "
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.877894 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6a46f49-9d70-4876-a8ba-070a44606a93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6a46f49-9d70-4876-a8ba-070a44606a93" (UID: "b6a46f49-9d70-4876-a8ba-070a44606a93"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.877926 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194" (UID: "50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.878279 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8e0bf0e-258d-41c3-af5b-86b1413d0d9b" (UID: "b8e0bf0e-258d-41c3-af5b-86b1413d0d9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.881238 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a46f49-9d70-4876-a8ba-070a44606a93-kube-api-access-dvzjx" (OuterVolumeSpecName: "kube-api-access-dvzjx") pod "b6a46f49-9d70-4876-a8ba-070a44606a93" (UID: "b6a46f49-9d70-4876-a8ba-070a44606a93"). InnerVolumeSpecName "kube-api-access-dvzjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.881272 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-kube-api-access-lh6rc" (OuterVolumeSpecName: "kube-api-access-lh6rc") pod "b8e0bf0e-258d-41c3-af5b-86b1413d0d9b" (UID: "b8e0bf0e-258d-41c3-af5b-86b1413d0d9b"). InnerVolumeSpecName "kube-api-access-lh6rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.882581 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-kube-api-access-5592n" (OuterVolumeSpecName: "kube-api-access-5592n") pod "50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194" (UID: "50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194"). InnerVolumeSpecName "kube-api-access-5592n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.978908 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5592n\" (UniqueName: \"kubernetes.io/projected/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-kube-api-access-5592n\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.978942 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a46f49-9d70-4876-a8ba-070a44606a93-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.978951 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvzjx\" (UniqueName: \"kubernetes.io/projected/b6a46f49-9d70-4876-a8ba-070a44606a93-kube-api-access-dvzjx\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.978960 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.978968 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:44 crc kubenswrapper[4789]: I1124 11:45:44.978999 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh6rc\" (UniqueName: \"kubernetes.io/projected/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b-kube-api-access-lh6rc\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.191737 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jsdzm"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.191735 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jsdzm" event={"ID":"b6a46f49-9d70-4876-a8ba-070a44606a93","Type":"ContainerDied","Data":"dcc78b4c53e94edca5c952b0ad866fe01e1f587ddbf4afe8f00f79e1511ed240"}
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.191853 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc78b4c53e94edca5c952b0ad866fe01e1f587ddbf4afe8f00f79e1511ed240"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.203018 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cd25-account-create-56jpk"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.203034 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd25-account-create-56jpk" event={"ID":"50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194","Type":"ContainerDied","Data":"ad68302ded085d0c9175fb17988f8166c043984647d75468391fbe6cebd7b3b2"}
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.203068 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad68302ded085d0c9175fb17988f8166c043984647d75468391fbe6cebd7b3b2"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.205323 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e52f-account-create-5n95s" event={"ID":"a18094e0-852b-4365-b8c8-a65185dc446e","Type":"ContainerDied","Data":"86d72e21a46cd1eca462e526a326188431326f29b4e06aef6a716f62f1c17369"}
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.205340 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e52f-account-create-5n95s"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.205347 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d72e21a46cd1eca462e526a326188431326f29b4e06aef6a716f62f1c17369"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.211906 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zqv9q"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.212220 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zqv9q" event={"ID":"b8e0bf0e-258d-41c3-af5b-86b1413d0d9b","Type":"ContainerDied","Data":"14efbf55d3469d99ace790bdb71a0c05f236a90617c6d821d94a1077f7522318"}
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.212259 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14efbf55d3469d99ace790bdb71a0c05f236a90617c6d821d94a1077f7522318"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.518049 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.578716 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.590559 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82zf5\" (UniqueName: \"kubernetes.io/projected/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-kube-api-access-82zf5\") pod \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\" (UID: \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\") "
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.590592 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxxnh\" (UniqueName: \"kubernetes.io/projected/91441784-0780-4721-bed1-4197f7f24cdb-kube-api-access-lxxnh\") pod \"91441784-0780-4721-bed1-4197f7f24cdb\" (UID: \"91441784-0780-4721-bed1-4197f7f24cdb\") "
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.590654 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91441784-0780-4721-bed1-4197f7f24cdb-operator-scripts\") pod \"91441784-0780-4721-bed1-4197f7f24cdb\" (UID: \"91441784-0780-4721-bed1-4197f7f24cdb\") "
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.590712 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-operator-scripts\") pod \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\" (UID: \"fd8a3a60-2e4e-461d-be45-3b2d8db511ba\") "
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.591744 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91441784-0780-4721-bed1-4197f7f24cdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91441784-0780-4721-bed1-4197f7f24cdb" (UID: "91441784-0780-4721-bed1-4197f7f24cdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.593606 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd8a3a60-2e4e-461d-be45-3b2d8db511ba" (UID: "fd8a3a60-2e4e-461d-be45-3b2d8db511ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.597436 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91441784-0780-4721-bed1-4197f7f24cdb-kube-api-access-lxxnh" (OuterVolumeSpecName: "kube-api-access-lxxnh") pod "91441784-0780-4721-bed1-4197f7f24cdb" (UID: "91441784-0780-4721-bed1-4197f7f24cdb"). InnerVolumeSpecName "kube-api-access-lxxnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.599859 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-kube-api-access-82zf5" (OuterVolumeSpecName: "kube-api-access-82zf5") pod "fd8a3a60-2e4e-461d-be45-3b2d8db511ba" (UID: "fd8a3a60-2e4e-461d-be45-3b2d8db511ba"). InnerVolumeSpecName "kube-api-access-82zf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.692367 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.692406 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82zf5\" (UniqueName: \"kubernetes.io/projected/fd8a3a60-2e4e-461d-be45-3b2d8db511ba-kube-api-access-82zf5\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.692425 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxxnh\" (UniqueName: \"kubernetes.io/projected/91441784-0780-4721-bed1-4197f7f24cdb-kube-api-access-lxxnh\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:45 crc kubenswrapper[4789]: I1124 11:45:45.692434 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91441784-0780-4721-bed1-4197f7f24cdb-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:45:46 crc kubenswrapper[4789]: I1124 11:45:46.223081 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ccb9-account-create-n9jzt" event={"ID":"fd8a3a60-2e4e-461d-be45-3b2d8db511ba","Type":"ContainerDied","Data":"f78bf2aa0a0eeb6723ed25060c4ec20b25b33d12fa010c298b44a7f3a4e620c0"}
Nov 24 11:45:46 crc kubenswrapper[4789]: I1124 11:45:46.223504 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78bf2aa0a0eeb6723ed25060c4ec20b25b33d12fa010c298b44a7f3a4e620c0"
Nov 24 11:45:46 crc kubenswrapper[4789]: I1124 11:45:46.223722 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ccb9-account-create-n9jzt"
Nov 24 11:45:46 crc kubenswrapper[4789]: I1124 11:45:46.227982 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wdn9d" event={"ID":"91441784-0780-4721-bed1-4197f7f24cdb","Type":"ContainerDied","Data":"bcf88b9bfb51e3392f921b687b02a24932a6564bbb57f093d445fa4538a92233"}
Nov 24 11:45:46 crc kubenswrapper[4789]: I1124 11:45:46.228219 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf88b9bfb51e3392f921b687b02a24932a6564bbb57f093d445fa4538a92233"
Nov 24 11:45:46 crc kubenswrapper[4789]: I1124 11:45:46.228113 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wdn9d"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.004971 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ncww2"]
Nov 24 11:45:47 crc kubenswrapper[4789]: E1124 11:45:47.005286 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18094e0-852b-4365-b8c8-a65185dc446e" containerName="mariadb-account-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005300 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18094e0-852b-4365-b8c8-a65185dc446e" containerName="mariadb-account-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: E1124 11:45:47.005313 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a46f49-9d70-4876-a8ba-070a44606a93" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005322 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a46f49-9d70-4876-a8ba-070a44606a93" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: E1124 11:45:47.005331 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194" containerName="mariadb-account-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005337 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194" containerName="mariadb-account-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: E1124 11:45:47.005352 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91441784-0780-4721-bed1-4197f7f24cdb" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005358 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="91441784-0780-4721-bed1-4197f7f24cdb" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: E1124 11:45:47.005370 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8a3a60-2e4e-461d-be45-3b2d8db511ba" containerName="mariadb-account-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005376 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8a3a60-2e4e-461d-be45-3b2d8db511ba" containerName="mariadb-account-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: E1124 11:45:47.005405 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e0bf0e-258d-41c3-af5b-86b1413d0d9b" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005413 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e0bf0e-258d-41c3-af5b-86b1413d0d9b" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005589 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="91441784-0780-4721-bed1-4197f7f24cdb" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005607 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e0bf0e-258d-41c3-af5b-86b1413d0d9b" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005616 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a46f49-9d70-4876-a8ba-070a44606a93" containerName="mariadb-database-create"
Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005625 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194" containerName="mariadb-account-create"
Nov 24 11:45:47 crc
kubenswrapper[4789]: I1124 11:45:47.005634 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18094e0-852b-4365-b8c8-a65185dc446e" containerName="mariadb-account-create" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.005642 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8a3a60-2e4e-461d-be45-3b2d8db511ba" containerName="mariadb-account-create" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.006163 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.009013 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.010208 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d47kw" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.022708 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ncww2"] Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.115732 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ct8p\" (UniqueName: \"kubernetes.io/projected/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-kube-api-access-7ct8p\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.116125 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-combined-ca-bundle\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.116322 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-config-data\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.116546 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-db-sync-config-data\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.219656 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ct8p\" (UniqueName: \"kubernetes.io/projected/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-kube-api-access-7ct8p\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.219723 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-combined-ca-bundle\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.219786 4789 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-config-data\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.219825 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-db-sync-config-data\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.226419 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-db-sync-config-data\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.227120 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-config-data\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.228249 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-combined-ca-bundle\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.245972 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ct8p\" (UniqueName: \"kubernetes.io/projected/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-kube-api-access-7ct8p\") pod \"glance-db-sync-ncww2\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.328797 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ncww2" Nov 24 11:45:47 crc kubenswrapper[4789]: I1124 11:45:47.946157 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ncww2"] Nov 24 11:45:47 crc kubenswrapper[4789]: W1124 11:45:47.954920 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62d7feaf_71e2_4d0e_b0b9_2f61eb421522.slice/crio-2cdc6b43fafd4cceff57334149a20f58e1b1101afc4ebbf0c5263dc078fe31a2 WatchSource:0}: Error finding container 2cdc6b43fafd4cceff57334149a20f58e1b1101afc4ebbf0c5263dc078fe31a2: Status 404 returned error can't find the container with id 2cdc6b43fafd4cceff57334149a20f58e1b1101afc4ebbf0c5263dc078fe31a2 Nov 24 11:45:48 crc kubenswrapper[4789]: I1124 11:45:48.258490 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ncww2" event={"ID":"62d7feaf-71e2-4d0e-b0b9-2f61eb421522","Type":"ContainerStarted","Data":"2cdc6b43fafd4cceff57334149a20f58e1b1101afc4ebbf0c5263dc078fe31a2"} Nov 24 11:45:49 crc kubenswrapper[4789]: I1124 11:45:49.726607 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 11:45:50 crc kubenswrapper[4789]: I1124 11:45:50.162635 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:45:50 crc kubenswrapper[4789]: I1124 11:45:50.162981 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:45:50 crc kubenswrapper[4789]: I1124 11:45:50.163042 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:45:50 crc kubenswrapper[4789]: I1124 11:45:50.165993 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4aecda2250b38282b436cf65055990a602ab1ffc6d48744037d9fd3637b96bdb"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:45:50 crc kubenswrapper[4789]: I1124 11:45:50.166059 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://4aecda2250b38282b436cf65055990a602ab1ffc6d48744037d9fd3637b96bdb" gracePeriod=600 Nov 24 11:45:51 crc kubenswrapper[4789]: I1124 11:45:51.282021 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="4aecda2250b38282b436cf65055990a602ab1ffc6d48744037d9fd3637b96bdb" exitCode=0 Nov 24 11:45:51 crc kubenswrapper[4789]: I1124 11:45:51.282361 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" 
event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"4aecda2250b38282b436cf65055990a602ab1ffc6d48744037d9fd3637b96bdb"} Nov 24 11:45:51 crc kubenswrapper[4789]: I1124 11:45:51.282390 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"f3cea7aef07d9136d7cecc4814ad70b6e4b4a4c56940366aabbc6b2f1bc56ebf"} Nov 24 11:45:51 crc kubenswrapper[4789]: I1124 11:45:51.282409 4789 scope.go:117] "RemoveContainer" containerID="8e60897d5da5e8d43be26df5c1cea722069e382de7019ee5de88fc244959bfbd" Nov 24 11:45:53 crc kubenswrapper[4789]: I1124 11:45:53.440555 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.363234 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zh2n4" podUID="c77484cd-66ed-4471-9136-5e44eadd28ad" containerName="ovn-controller" probeResult="failure" output=< Nov 24 11:45:57 crc kubenswrapper[4789]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 11:45:57 crc kubenswrapper[4789]: > Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.417132 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4tbr6" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.441964 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4tbr6" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.633789 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zh2n4-config-gt7vq"] Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.634788 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.640937 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.648776 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zh2n4-config-gt7vq"] Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.721019 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-log-ovn\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.721162 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqp2\" (UniqueName: \"kubernetes.io/projected/150bf756-4c2b-4187-8e7c-c323de6d413e-kube-api-access-2xqp2\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.721195 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-additional-scripts\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.721236 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-scripts\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.721273 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run-ovn\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.721305 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.822964 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-log-ovn\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.823068 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xqp2\" (UniqueName: 
\"kubernetes.io/projected/150bf756-4c2b-4187-8e7c-c323de6d413e-kube-api-access-2xqp2\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.823104 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-additional-scripts\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.823152 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-scripts\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.823186 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run-ovn\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.823219 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.823382 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.823442 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run-ovn\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.824036 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-additional-scripts\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.824119 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-log-ovn\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.826992 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-scripts\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:57 crc kubenswrapper[4789]: I1124 11:45:57.853878 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xqp2\" (UniqueName: \"kubernetes.io/projected/150bf756-4c2b-4187-8e7c-c323de6d413e-kube-api-access-2xqp2\") pod \"ovn-controller-zh2n4-config-gt7vq\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:58 crc kubenswrapper[4789]: I1124 11:45:58.013219 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:45:58 crc kubenswrapper[4789]: I1124 11:45:58.350671 4789 generic.go:334] "Generic (PLEG): container finished" podID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerID="a664d29c1069225aca624a58f7f6bad45e8a79e6507290fb266b0b826e03e680" exitCode=0 Nov 24 11:45:58 crc kubenswrapper[4789]: I1124 11:45:58.350751 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad2c0f97-8696-425d-bd5a-42a24bee8297","Type":"ContainerDied","Data":"a664d29c1069225aca624a58f7f6bad45e8a79e6507290fb266b0b826e03e680"} Nov 24 11:45:58 crc kubenswrapper[4789]: I1124 11:45:58.354913 4789 generic.go:334] "Generic (PLEG): container finished" podID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" containerID="9a28c3039c74fe442ed3bbd247f272af8ce6498883c5cf3377a5ba815e084551" exitCode=0 Nov 24 11:45:58 crc kubenswrapper[4789]: I1124 11:45:58.354997 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e","Type":"ContainerDied","Data":"9a28c3039c74fe442ed3bbd247f272af8ce6498883c5cf3377a5ba815e084551"} Nov 24 11:46:02 crc kubenswrapper[4789]: I1124 11:46:02.402901 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zh2n4" podUID="c77484cd-66ed-4471-9136-5e44eadd28ad" containerName="ovn-controller" probeResult="failure" output=< Nov 24 11:46:02 crc kubenswrapper[4789]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 11:46:02 crc kubenswrapper[4789]: > Nov 24 11:46:02 crc kubenswrapper[4789]: I1124 11:46:02.767974 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zh2n4-config-gt7vq"] Nov 24 11:46:02 crc kubenswrapper[4789]: W1124 11:46:02.775955 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod150bf756_4c2b_4187_8e7c_c323de6d413e.slice/crio-7a67f7f34a185316d88e24eecf82298a3cd6ce560e85b73cfa189c14cef1805a WatchSource:0}: Error finding container 7a67f7f34a185316d88e24eecf82298a3cd6ce560e85b73cfa189c14cef1805a: Status 404 returned error can't find the container with id 7a67f7f34a185316d88e24eecf82298a3cd6ce560e85b73cfa189c14cef1805a Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.403518 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad2c0f97-8696-425d-bd5a-42a24bee8297","Type":"ContainerStarted","Data":"7021cc39c31aa6c4138f62bc54f62a8a1a86cc310c60d75d51202b5fe449c5b8"} Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.403972 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.408126 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e","Type":"ContainerStarted","Data":"189900dc95c48e8a3e902afa5bfccbfac9e8012793dfb430113a563c463e6eb9"} Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.418581 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.450180 4789 generic.go:334] "Generic (PLEG): container finished" podID="150bf756-4c2b-4187-8e7c-c323de6d413e" containerID="3865617a9e3d9f4a6c335d0b89d7ca697efada950648d2bace3fc1c19a4236c9" exitCode=0 Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.450338 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zh2n4-config-gt7vq" event={"ID":"150bf756-4c2b-4187-8e7c-c323de6d413e","Type":"ContainerDied","Data":"3865617a9e3d9f4a6c335d0b89d7ca697efada950648d2bace3fc1c19a4236c9"} Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.450358 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zh2n4-config-gt7vq" event={"ID":"150bf756-4c2b-4187-8e7c-c323de6d413e","Type":"ContainerStarted","Data":"7a67f7f34a185316d88e24eecf82298a3cd6ce560e85b73cfa189c14cef1805a"} Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.454777 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ncww2" event={"ID":"62d7feaf-71e2-4d0e-b0b9-2f61eb421522","Type":"ContainerStarted","Data":"697b5f7294de6d915708d15465c6ac3301ba6fcc77c785d7366e7147d9b854d9"} Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.472335 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.223320458 podStartE2EDuration="1m7.472317409s" podCreationTimestamp="2025-11-24 11:44:56 +0000 UTC" firstStartedPulling="2025-11-24 11:45:11.090671104 +0000 UTC m=+893.673142493" lastFinishedPulling="2025-11-24 11:45:23.339668075 +0000 UTC m=+905.922139444" observedRunningTime="2025-11-24 11:46:03.446858553 +0000 UTC m=+946.029329932" watchObservedRunningTime="2025-11-24 11:46:03.472317409 +0000 UTC m=+946.054788778" Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.475883 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.940018181 podStartE2EDuration="1m7.475872805s" podCreationTimestamp="2025-11-24 11:44:56 +0000 UTC" firstStartedPulling="2025-11-24 11:45:10.200866632 +0000 UTC m=+892.783338011" lastFinishedPulling="2025-11-24 11:45:23.736721256 +0000 UTC m=+906.319192635" observedRunningTime="2025-11-24 11:46:03.469283375 +0000 UTC m=+946.051754774" watchObservedRunningTime="2025-11-24 11:46:03.475872805 +0000 UTC m=+946.058344184" Nov 24 11:46:03 crc kubenswrapper[4789]: I1124 11:46:03.505055 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ncww2" podStartSLOduration=2.981707333 podStartE2EDuration="17.505028181s" podCreationTimestamp="2025-11-24 11:45:46 +0000 UTC" firstStartedPulling="2025-11-24 11:45:47.958105061 +0000 UTC m=+930.540576450" lastFinishedPulling="2025-11-24 11:46:02.481425919 +0000 UTC m=+945.063897298" observedRunningTime="2025-11-24 11:46:03.50291531 +0000 UTC m=+946.085386689" watchObservedRunningTime="2025-11-24 11:46:03.505028181 +0000 UTC 
m=+946.087499570" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.049786 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177296 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xqp2\" (UniqueName: \"kubernetes.io/projected/150bf756-4c2b-4187-8e7c-c323de6d413e-kube-api-access-2xqp2\") pod \"150bf756-4c2b-4187-8e7c-c323de6d413e\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177406 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-scripts\") pod \"150bf756-4c2b-4187-8e7c-c323de6d413e\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177443 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run-ovn\") pod \"150bf756-4c2b-4187-8e7c-c323de6d413e\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177478 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run\") pod \"150bf756-4c2b-4187-8e7c-c323de6d413e\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177507 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-additional-scripts\") pod \"150bf756-4c2b-4187-8e7c-c323de6d413e\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177553 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-log-ovn\") pod \"150bf756-4c2b-4187-8e7c-c323de6d413e\" (UID: \"150bf756-4c2b-4187-8e7c-c323de6d413e\") " Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177544 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "150bf756-4c2b-4187-8e7c-c323de6d413e" (UID: "150bf756-4c2b-4187-8e7c-c323de6d413e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177593 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run" (OuterVolumeSpecName: "var-run") pod "150bf756-4c2b-4187-8e7c-c323de6d413e" (UID: "150bf756-4c2b-4187-8e7c-c323de6d413e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.177716 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "150bf756-4c2b-4187-8e7c-c323de6d413e" (UID: "150bf756-4c2b-4187-8e7c-c323de6d413e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.178102 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "150bf756-4c2b-4187-8e7c-c323de6d413e" (UID: "150bf756-4c2b-4187-8e7c-c323de6d413e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.178178 4789 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.178199 4789 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.178209 4789 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/150bf756-4c2b-4187-8e7c-c323de6d413e-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.178313 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-scripts" (OuterVolumeSpecName: "scripts") pod "150bf756-4c2b-4187-8e7c-c323de6d413e" (UID: "150bf756-4c2b-4187-8e7c-c323de6d413e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.187750 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150bf756-4c2b-4187-8e7c-c323de6d413e-kube-api-access-2xqp2" (OuterVolumeSpecName: "kube-api-access-2xqp2") pod "150bf756-4c2b-4187-8e7c-c323de6d413e" (UID: "150bf756-4c2b-4187-8e7c-c323de6d413e"). InnerVolumeSpecName "kube-api-access-2xqp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.279217 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xqp2\" (UniqueName: \"kubernetes.io/projected/150bf756-4c2b-4187-8e7c-c323de6d413e-kube-api-access-2xqp2\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.279253 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.279263 4789 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/150bf756-4c2b-4187-8e7c-c323de6d413e-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.473066 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zh2n4-config-gt7vq" event={"ID":"150bf756-4c2b-4187-8e7c-c323de6d413e","Type":"ContainerDied","Data":"7a67f7f34a185316d88e24eecf82298a3cd6ce560e85b73cfa189c14cef1805a"} Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.473134 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a67f7f34a185316d88e24eecf82298a3cd6ce560e85b73cfa189c14cef1805a" Nov 24 11:46:05 crc kubenswrapper[4789]: I1124 11:46:05.473218 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zh2n4-config-gt7vq" Nov 24 11:46:06 crc kubenswrapper[4789]: I1124 11:46:06.178217 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zh2n4-config-gt7vq"] Nov 24 11:46:06 crc kubenswrapper[4789]: I1124 11:46:06.178261 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zh2n4-config-gt7vq"] Nov 24 11:46:07 crc kubenswrapper[4789]: I1124 11:46:07.371180 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zh2n4" Nov 24 11:46:08 crc kubenswrapper[4789]: I1124 11:46:08.181433 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150bf756-4c2b-4187-8e7c-c323de6d413e" path="/var/lib/kubelet/pods/150bf756-4c2b-4187-8e7c-c323de6d413e/volumes" Nov 24 11:46:09 crc kubenswrapper[4789]: I1124 11:46:09.501079 4789 generic.go:334] "Generic (PLEG): container finished" podID="62d7feaf-71e2-4d0e-b0b9-2f61eb421522" containerID="697b5f7294de6d915708d15465c6ac3301ba6fcc77c785d7366e7147d9b854d9" exitCode=0 Nov 24 11:46:09 crc kubenswrapper[4789]: I1124 11:46:09.501168 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ncww2" event={"ID":"62d7feaf-71e2-4d0e-b0b9-2f61eb421522","Type":"ContainerDied","Data":"697b5f7294de6d915708d15465c6ac3301ba6fcc77c785d7366e7147d9b854d9"} Nov 24 11:46:10 crc kubenswrapper[4789]: I1124 11:46:10.944352 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ncww2" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.074454 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-db-sync-config-data\") pod \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.074571 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-combined-ca-bundle\") pod \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.074601 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-config-data\") pod \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.074658 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ct8p\" (UniqueName: \"kubernetes.io/projected/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-kube-api-access-7ct8p\") pod \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\" (UID: \"62d7feaf-71e2-4d0e-b0b9-2f61eb421522\") " Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.092746 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-kube-api-access-7ct8p" (OuterVolumeSpecName: "kube-api-access-7ct8p") pod "62d7feaf-71e2-4d0e-b0b9-2f61eb421522" (UID: "62d7feaf-71e2-4d0e-b0b9-2f61eb421522"). InnerVolumeSpecName "kube-api-access-7ct8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.099718 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "62d7feaf-71e2-4d0e-b0b9-2f61eb421522" (UID: "62d7feaf-71e2-4d0e-b0b9-2f61eb421522"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.105017 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62d7feaf-71e2-4d0e-b0b9-2f61eb421522" (UID: "62d7feaf-71e2-4d0e-b0b9-2f61eb421522"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.140663 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-config-data" (OuterVolumeSpecName: "config-data") pod "62d7feaf-71e2-4d0e-b0b9-2f61eb421522" (UID: "62d7feaf-71e2-4d0e-b0b9-2f61eb421522"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.176054 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.176088 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.176101 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ct8p\" (UniqueName: \"kubernetes.io/projected/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-kube-api-access-7ct8p\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.176111 4789 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/62d7feaf-71e2-4d0e-b0b9-2f61eb421522-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.519428 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ncww2" event={"ID":"62d7feaf-71e2-4d0e-b0b9-2f61eb421522","Type":"ContainerDied","Data":"2cdc6b43fafd4cceff57334149a20f58e1b1101afc4ebbf0c5263dc078fe31a2"} Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.519479 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cdc6b43fafd4cceff57334149a20f58e1b1101afc4ebbf0c5263dc078fe31a2" Nov 24 11:46:11 crc kubenswrapper[4789]: I1124 11:46:11.519930 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ncww2" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.118215 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2mcd8"] Nov 24 11:46:12 crc kubenswrapper[4789]: E1124 11:46:12.118740 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d7feaf-71e2-4d0e-b0b9-2f61eb421522" containerName="glance-db-sync" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.118752 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d7feaf-71e2-4d0e-b0b9-2f61eb421522" containerName="glance-db-sync" Nov 24 11:46:12 crc kubenswrapper[4789]: E1124 11:46:12.118762 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150bf756-4c2b-4187-8e7c-c323de6d413e" containerName="ovn-config" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.118767 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="150bf756-4c2b-4187-8e7c-c323de6d413e" containerName="ovn-config" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.118914 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="150bf756-4c2b-4187-8e7c-c323de6d413e" containerName="ovn-config" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.118927 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d7feaf-71e2-4d0e-b0b9-2f61eb421522" containerName="glance-db-sync" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.119706 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.148550 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2mcd8"] Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.193153 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-config\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.193214 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.193239 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-dns-svc\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.193418 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.193704 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtg2n\" (UniqueName: \"kubernetes.io/projected/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-kube-api-access-gtg2n\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.294985 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtg2n\" (UniqueName: \"kubernetes.io/projected/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-kube-api-access-gtg2n\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.295072 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-config\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.295109 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.295134 4789 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-dns-svc\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.295169 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.296054 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.296867 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-config\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.297366 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.297888 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-dns-svc\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.315017 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtg2n\" (UniqueName: \"kubernetes.io/projected/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-kube-api-access-gtg2n\") pod \"dnsmasq-dns-554567b4f7-2mcd8\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.433963 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:12 crc kubenswrapper[4789]: I1124 11:46:12.716611 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2mcd8"] Nov 24 11:46:13 crc kubenswrapper[4789]: I1124 11:46:13.539243 4789 generic.go:334] "Generic (PLEG): container finished" podID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerID="85d93876c6ae5d5c0f07793b37a6aed37075742b066c1e8e08debe771caba1d5" exitCode=0 Nov 24 11:46:13 crc kubenswrapper[4789]: I1124 11:46:13.539338 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" event={"ID":"b2dbeaf7-abf7-4d60-a27f-e60b91597b44","Type":"ContainerDied","Data":"85d93876c6ae5d5c0f07793b37a6aed37075742b066c1e8e08debe771caba1d5"} Nov 24 11:46:13 crc kubenswrapper[4789]: I1124 11:46:13.539621 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" event={"ID":"b2dbeaf7-abf7-4d60-a27f-e60b91597b44","Type":"ContainerStarted","Data":"c61e9fc1181257d9524e694e1c8bc1e819b8735dda5ace09b80b1ac3e8dc4910"} Nov 24 11:46:14 crc kubenswrapper[4789]: I1124 11:46:14.549263 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" event={"ID":"b2dbeaf7-abf7-4d60-a27f-e60b91597b44","Type":"ContainerStarted","Data":"5377bf93bbcf30611581a21ec01c42b0fd1c463c51d24ff0155e87586e5c76e5"} Nov 24 11:46:14 crc kubenswrapper[4789]: I1124 11:46:14.549789 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:14 crc kubenswrapper[4789]: I1124 11:46:14.575506 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" podStartSLOduration=2.575485704 podStartE2EDuration="2.575485704s" podCreationTimestamp="2025-11-24 11:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:46:14.571635791 +0000 UTC m=+957.154107160" watchObservedRunningTime="2025-11-24 11:46:14.575485704 +0000 UTC m=+957.157957093" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.279767 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.354713 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.688073 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dc2mk"] Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.695454 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.719558 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dc2mk"] Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.810447 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jhd\" (UniqueName: \"kubernetes.io/projected/20a4d7a1-39fa-4ab6-add9-7258bb865809-kube-api-access-94jhd\") pod \"cinder-db-create-dc2mk\" (UID: \"20a4d7a1-39fa-4ab6-add9-7258bb865809\") " pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.810576 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a4d7a1-39fa-4ab6-add9-7258bb865809-operator-scripts\") pod \"cinder-db-create-dc2mk\" (UID: \"20a4d7a1-39fa-4ab6-add9-7258bb865809\") " pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.871975 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mzj6x"] Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.873733 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.889442 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mzj6x"] Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.901764 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3375-account-create-nntd8"] Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.912548 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a4d7a1-39fa-4ab6-add9-7258bb865809-operator-scripts\") pod \"cinder-db-create-dc2mk\" (UID: \"20a4d7a1-39fa-4ab6-add9-7258bb865809\") " pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.912702 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94jhd\" (UniqueName: \"kubernetes.io/projected/20a4d7a1-39fa-4ab6-add9-7258bb865809-kube-api-access-94jhd\") pod \"cinder-db-create-dc2mk\" (UID: \"20a4d7a1-39fa-4ab6-add9-7258bb865809\") " pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.913238 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a4d7a1-39fa-4ab6-add9-7258bb865809-operator-scripts\") pod \"cinder-db-create-dc2mk\" (UID: \"20a4d7a1-39fa-4ab6-add9-7258bb865809\") " pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.918037 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.931629 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.933391 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3375-account-create-nntd8"] Nov 24 11:46:18 crc kubenswrapper[4789]: I1124 11:46:18.938027 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94jhd\" (UniqueName: \"kubernetes.io/projected/20a4d7a1-39fa-4ab6-add9-7258bb865809-kube-api-access-94jhd\") pod \"cinder-db-create-dc2mk\" (UID: \"20a4d7a1-39fa-4ab6-add9-7258bb865809\") " pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.014899 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a523a3ed-09c8-4752-8b89-562cbb1c80c1-operator-scripts\") pod \"barbican-db-create-mzj6x\" (UID: \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\") " pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.015211 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n74rn\" (UniqueName: \"kubernetes.io/projected/85dfb49e-554c-415f-9add-67bb02165386-kube-api-access-n74rn\") pod \"barbican-3375-account-create-nntd8\" (UID: \"85dfb49e-554c-415f-9add-67bb02165386\") " pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.015366 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgjp\" (UniqueName: \"kubernetes.io/projected/a523a3ed-09c8-4752-8b89-562cbb1c80c1-kube-api-access-fsgjp\") pod \"barbican-db-create-mzj6x\" (UID: \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\") " pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.015511 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dfb49e-554c-415f-9add-67bb02165386-operator-scripts\") pod \"barbican-3375-account-create-nntd8\" (UID: \"85dfb49e-554c-415f-9add-67bb02165386\") " pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.032846 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.060347 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vplq8"] Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.061268 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.069563 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vplq8"] Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.226412 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8dfa37-0258-4fa8-814f-52c167e55e9c-operator-scripts\") pod \"neutron-db-create-vplq8\" (UID: \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\") " pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.226753 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv468\" (UniqueName: \"kubernetes.io/projected/5d8dfa37-0258-4fa8-814f-52c167e55e9c-kube-api-access-tv468\") pod \"neutron-db-create-vplq8\" (UID: \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\") " pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.226800 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a523a3ed-09c8-4752-8b89-562cbb1c80c1-operator-scripts\") pod \"barbican-db-create-mzj6x\" (UID: \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\") " pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.226837 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n74rn\" (UniqueName: \"kubernetes.io/projected/85dfb49e-554c-415f-9add-67bb02165386-kube-api-access-n74rn\") pod \"barbican-3375-account-create-nntd8\" (UID: \"85dfb49e-554c-415f-9add-67bb02165386\") " pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.226865 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgjp\" (UniqueName: \"kubernetes.io/projected/a523a3ed-09c8-4752-8b89-562cbb1c80c1-kube-api-access-fsgjp\") pod \"barbican-db-create-mzj6x\" (UID: \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\") " pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.226907 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dfb49e-554c-415f-9add-67bb02165386-operator-scripts\") pod \"barbican-3375-account-create-nntd8\" (UID: \"85dfb49e-554c-415f-9add-67bb02165386\") " pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.227810 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dfb49e-554c-415f-9add-67bb02165386-operator-scripts\") pod \"barbican-3375-account-create-nntd8\" (UID: \"85dfb49e-554c-415f-9add-67bb02165386\") " pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.228489 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a523a3ed-09c8-4752-8b89-562cbb1c80c1-operator-scripts\") pod \"barbican-db-create-mzj6x\" (UID: \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\") " pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.295655 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fsgjp\" (UniqueName: \"kubernetes.io/projected/a523a3ed-09c8-4752-8b89-562cbb1c80c1-kube-api-access-fsgjp\") pod \"barbican-db-create-mzj6x\" (UID: \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\") " pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.303530 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n74rn\" (UniqueName: \"kubernetes.io/projected/85dfb49e-554c-415f-9add-67bb02165386-kube-api-access-n74rn\") pod \"barbican-3375-account-create-nntd8\" (UID: \"85dfb49e-554c-415f-9add-67bb02165386\") " pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.304297 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rmhhs"] Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.305333 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.314257 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.314409 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.314531 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.314702 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gpqd2" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.328283 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-combined-ca-bundle\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.328339 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-config-data\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.328379 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8zw\" (UniqueName: \"kubernetes.io/projected/46218063-8c0c-4d2a-9693-1ee25e647520-kube-api-access-fz8zw\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.328412 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8dfa37-0258-4fa8-814f-52c167e55e9c-operator-scripts\") pod \"neutron-db-create-vplq8\" (UID: \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\") " pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.328430 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv468\" (UniqueName: \"kubernetes.io/projected/5d8dfa37-0258-4fa8-814f-52c167e55e9c-kube-api-access-tv468\") pod \"neutron-db-create-vplq8\" (UID: 
\"5d8dfa37-0258-4fa8-814f-52c167e55e9c\") " pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.330052 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8dfa37-0258-4fa8-814f-52c167e55e9c-operator-scripts\") pod \"neutron-db-create-vplq8\" (UID: \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\") " pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.337293 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9ffd-account-create-k89vc"] Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.338592 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.347285 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.362933 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv468\" (UniqueName: \"kubernetes.io/projected/5d8dfa37-0258-4fa8-814f-52c167e55e9c-kube-api-access-tv468\") pod \"neutron-db-create-vplq8\" (UID: \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\") " pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.365178 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rmhhs"] Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.382176 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9ffd-account-create-k89vc"] Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.402550 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4176-account-create-6mldw"] Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.403562 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4176-account-create-6mldw"] Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.403645 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.405868 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.429767 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-combined-ca-bundle\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.429986 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-config-data\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.430093 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz8zw\" (UniqueName: \"kubernetes.io/projected/46218063-8c0c-4d2a-9693-1ee25e647520-kube-api-access-fz8zw\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.430173 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6v5\" (UniqueName: \"kubernetes.io/projected/91e382fe-d85a-44e5-8047-e3ddad1a85f4-kube-api-access-nn6v5\") pod \"cinder-4176-account-create-6mldw\" (UID: \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\") " pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.431669 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e382fe-d85a-44e5-8047-e3ddad1a85f4-operator-scripts\") pod \"cinder-4176-account-create-6mldw\" (UID: \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\") " pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.431797 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mdf\" (UniqueName: \"kubernetes.io/projected/6bba6a0a-259f-4a74-850e-2025f99757e6-kube-api-access-44mdf\") pod \"neutron-9ffd-account-create-k89vc\" (UID: \"6bba6a0a-259f-4a74-850e-2025f99757e6\") " pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.431930 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bba6a0a-259f-4a74-850e-2025f99757e6-operator-scripts\") pod \"neutron-9ffd-account-create-k89vc\" (UID: \"6bba6a0a-259f-4a74-850e-2025f99757e6\") " pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.435920 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-combined-ca-bundle\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.438659 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-config-data\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.454890 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz8zw\" (UniqueName: \"kubernetes.io/projected/46218063-8c0c-4d2a-9693-1ee25e647520-kube-api-access-fz8zw\") pod \"keystone-db-sync-rmhhs\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.494356 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.533374 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bba6a0a-259f-4a74-850e-2025f99757e6-operator-scripts\") pod \"neutron-9ffd-account-create-k89vc\" (UID: \"6bba6a0a-259f-4a74-850e-2025f99757e6\") " pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.533505 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6v5\" (UniqueName: \"kubernetes.io/projected/91e382fe-d85a-44e5-8047-e3ddad1a85f4-kube-api-access-nn6v5\") pod \"cinder-4176-account-create-6mldw\" (UID: \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\") " pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.533526 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e382fe-d85a-44e5-8047-e3ddad1a85f4-operator-scripts\") pod \"cinder-4176-account-create-6mldw\" (UID: \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\") " pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.533547 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44mdf\" (UniqueName: \"kubernetes.io/projected/6bba6a0a-259f-4a74-850e-2025f99757e6-kube-api-access-44mdf\") pod \"neutron-9ffd-account-create-k89vc\" (UID: \"6bba6a0a-259f-4a74-850e-2025f99757e6\") " pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.534276 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bba6a0a-259f-4a74-850e-2025f99757e6-operator-scripts\") pod \"neutron-9ffd-account-create-k89vc\" (UID: \"6bba6a0a-259f-4a74-850e-2025f99757e6\") " pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.534577 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e382fe-d85a-44e5-8047-e3ddad1a85f4-operator-scripts\") pod \"cinder-4176-account-create-6mldw\" (UID: \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\") " pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.559873 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn6v5\" (UniqueName: \"kubernetes.io/projected/91e382fe-d85a-44e5-8047-e3ddad1a85f4-kube-api-access-nn6v5\") pod \"cinder-4176-account-create-6mldw\" (UID: 
\"91e382fe-d85a-44e5-8047-e3ddad1a85f4\") " pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.559871 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44mdf\" (UniqueName: \"kubernetes.io/projected/6bba6a0a-259f-4a74-850e-2025f99757e6-kube-api-access-44mdf\") pod \"neutron-9ffd-account-create-k89vc\" (UID: \"6bba6a0a-259f-4a74-850e-2025f99757e6\") " pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.583956 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.589954 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.641109 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.657204 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.719617 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:19 crc kubenswrapper[4789]: I1124 11:46:19.864232 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dc2mk"] Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.019328 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mzj6x"] Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.272298 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3375-account-create-nntd8"] Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.284325 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vplq8"] Nov 24 11:46:20 crc kubenswrapper[4789]: W1124 11:46:20.480315 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bba6a0a_259f_4a74_850e_2025f99757e6.slice/crio-f881dc18f34606e943f094c22ca2be6595c8ffce4d8bbce37f93674c84e8e12c WatchSource:0}: Error finding container f881dc18f34606e943f094c22ca2be6595c8ffce4d8bbce37f93674c84e8e12c: Status 404 returned error can't find the container with id f881dc18f34606e943f094c22ca2be6595c8ffce4d8bbce37f93674c84e8e12c Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.486417 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9ffd-account-create-k89vc"] Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.525401 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rmhhs"] Nov 24 11:46:20 crc kubenswrapper[4789]: W1124 11:46:20.541087 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46218063_8c0c_4d2a_9693_1ee25e647520.slice/crio-7bb423bfb4288cf9e4fb18b194b89f50e2e719bbf695ff4eeab349049b87ba65 WatchSource:0}: Error finding container 7bb423bfb4288cf9e4fb18b194b89f50e2e719bbf695ff4eeab349049b87ba65: Status 404 returned error can't find the container with id 7bb423bfb4288cf9e4fb18b194b89f50e2e719bbf695ff4eeab349049b87ba65 Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.602197 4789 
generic.go:334] "Generic (PLEG): container finished" podID="20a4d7a1-39fa-4ab6-add9-7258bb865809" containerID="ab84db5a3cd50cfd4792c1a4cb6de8f1370640d139190120835d22b0a44e71ff" exitCode=0 Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.602287 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dc2mk" event={"ID":"20a4d7a1-39fa-4ab6-add9-7258bb865809","Type":"ContainerDied","Data":"ab84db5a3cd50cfd4792c1a4cb6de8f1370640d139190120835d22b0a44e71ff"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.602322 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dc2mk" event={"ID":"20a4d7a1-39fa-4ab6-add9-7258bb865809","Type":"ContainerStarted","Data":"c88c95b99c32951f2c1e912cf55e03f888279b990ad0d4ad520d28c955525d02"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.607710 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4176-account-create-6mldw"] Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.611592 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9ffd-account-create-k89vc" event={"ID":"6bba6a0a-259f-4a74-850e-2025f99757e6","Type":"ContainerStarted","Data":"f881dc18f34606e943f094c22ca2be6595c8ffce4d8bbce37f93674c84e8e12c"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.616088 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rmhhs" event={"ID":"46218063-8c0c-4d2a-9693-1ee25e647520","Type":"ContainerStarted","Data":"7bb423bfb4288cf9e4fb18b194b89f50e2e719bbf695ff4eeab349049b87ba65"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.632731 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mzj6x" event={"ID":"a523a3ed-09c8-4752-8b89-562cbb1c80c1","Type":"ContainerStarted","Data":"34cd99cb10cabf025b3f8220a3061cae389ee2f725ae0969fb2696ef640c86e4"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.633019 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mzj6x" event={"ID":"a523a3ed-09c8-4752-8b89-562cbb1c80c1","Type":"ContainerStarted","Data":"cf4e58789b0a54cd36c83b6c04de6db06fc566803e118b9d7661b385e1ef3fb1"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.644258 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vplq8" event={"ID":"5d8dfa37-0258-4fa8-814f-52c167e55e9c","Type":"ContainerStarted","Data":"9133667257b57c7d071afe53d34c96c371437c3bd80b52d8e60bc9be1d6da32d"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.644303 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vplq8" event={"ID":"5d8dfa37-0258-4fa8-814f-52c167e55e9c","Type":"ContainerStarted","Data":"3a5dfbc83311d983cf459b780cc5e87af4084f068dc13f852007dfa4c1ad628b"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.651487 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3375-account-create-nntd8" event={"ID":"85dfb49e-554c-415f-9add-67bb02165386","Type":"ContainerStarted","Data":"7d3e6d13861a4724fa638f14c558d0f5c9a2c2dbb59ba3482c94312b453716c3"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.651528 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3375-account-create-nntd8" event={"ID":"85dfb49e-554c-415f-9add-67bb02165386","Type":"ContainerStarted","Data":"564b41ce1a4f882821b62e86a9d655314c215b49a5ad6f7cbe28b58a3d6a3baa"} Nov 24 11:46:20 crc kubenswrapper[4789]: I1124 11:46:20.700149 4789 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-3375-account-create-nntd8" podStartSLOduration=2.700125586 podStartE2EDuration="2.700125586s" podCreationTimestamp="2025-11-24 11:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:46:20.697748078 +0000 UTC m=+963.280219457" watchObservedRunningTime="2025-11-24 11:46:20.700125586 +0000 UTC m=+963.282596965" Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.673935 4789 generic.go:334] "Generic (PLEG): container finished" podID="6bba6a0a-259f-4a74-850e-2025f99757e6" containerID="f14469d605d098a8407f8971827a51d2f70403e054d6e52ca2ac391f6d0e6abf" exitCode=0 Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.674020 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9ffd-account-create-k89vc" event={"ID":"6bba6a0a-259f-4a74-850e-2025f99757e6","Type":"ContainerDied","Data":"f14469d605d098a8407f8971827a51d2f70403e054d6e52ca2ac391f6d0e6abf"} Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.683529 4789 generic.go:334] "Generic (PLEG): container finished" podID="a523a3ed-09c8-4752-8b89-562cbb1c80c1" containerID="34cd99cb10cabf025b3f8220a3061cae389ee2f725ae0969fb2696ef640c86e4" exitCode=0 Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.683589 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mzj6x" event={"ID":"a523a3ed-09c8-4752-8b89-562cbb1c80c1","Type":"ContainerDied","Data":"34cd99cb10cabf025b3f8220a3061cae389ee2f725ae0969fb2696ef640c86e4"} Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.685903 4789 generic.go:334] "Generic (PLEG): container finished" podID="5d8dfa37-0258-4fa8-814f-52c167e55e9c" containerID="9133667257b57c7d071afe53d34c96c371437c3bd80b52d8e60bc9be1d6da32d" exitCode=0 Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.685957 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vplq8" event={"ID":"5d8dfa37-0258-4fa8-814f-52c167e55e9c","Type":"ContainerDied","Data":"9133667257b57c7d071afe53d34c96c371437c3bd80b52d8e60bc9be1d6da32d"} Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.688063 4789 generic.go:334] "Generic (PLEG): container finished" podID="85dfb49e-554c-415f-9add-67bb02165386" containerID="7d3e6d13861a4724fa638f14c558d0f5c9a2c2dbb59ba3482c94312b453716c3" exitCode=0 Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.688109 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3375-account-create-nntd8" event={"ID":"85dfb49e-554c-415f-9add-67bb02165386","Type":"ContainerDied","Data":"7d3e6d13861a4724fa638f14c558d0f5c9a2c2dbb59ba3482c94312b453716c3"} Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.700599 4789 generic.go:334] "Generic (PLEG): container finished" podID="91e382fe-d85a-44e5-8047-e3ddad1a85f4" containerID="98ec19b78e1773cd12a2bce81079e889c24c3b45919fad2faebb2c5d7093a893" exitCode=0 Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.700816 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4176-account-create-6mldw" event={"ID":"91e382fe-d85a-44e5-8047-e3ddad1a85f4","Type":"ContainerDied","Data":"98ec19b78e1773cd12a2bce81079e889c24c3b45919fad2faebb2c5d7093a893"} Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.700842 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4176-account-create-6mldw" 
event={"ID":"91e382fe-d85a-44e5-8047-e3ddad1a85f4","Type":"ContainerStarted","Data":"2e232f13c17bc2d791c58d366dae796d0a1bf5e4b208c5884f6a387efc095296"} Nov 24 11:46:21 crc kubenswrapper[4789]: I1124 11:46:21.708175 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-vplq8" podStartSLOduration=2.708154981 podStartE2EDuration="2.708154981s" podCreationTimestamp="2025-11-24 11:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:46:20.722824365 +0000 UTC m=+963.305295744" watchObservedRunningTime="2025-11-24 11:46:21.708154981 +0000 UTC m=+964.290626360" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.121797 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.272540 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.284422 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a523a3ed-09c8-4752-8b89-562cbb1c80c1-operator-scripts\") pod \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\" (UID: \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\") " Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.285618 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a523a3ed-09c8-4752-8b89-562cbb1c80c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a523a3ed-09c8-4752-8b89-562cbb1c80c1" (UID: "a523a3ed-09c8-4752-8b89-562cbb1c80c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.285740 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsgjp\" (UniqueName: \"kubernetes.io/projected/a523a3ed-09c8-4752-8b89-562cbb1c80c1-kube-api-access-fsgjp\") pod \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\" (UID: \"a523a3ed-09c8-4752-8b89-562cbb1c80c1\") " Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.286211 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a523a3ed-09c8-4752-8b89-562cbb1c80c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.292426 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a523a3ed-09c8-4752-8b89-562cbb1c80c1-kube-api-access-fsgjp" (OuterVolumeSpecName: "kube-api-access-fsgjp") pod "a523a3ed-09c8-4752-8b89-562cbb1c80c1" (UID: "a523a3ed-09c8-4752-8b89-562cbb1c80c1"). InnerVolumeSpecName "kube-api-access-fsgjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.388303 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a4d7a1-39fa-4ab6-add9-7258bb865809-operator-scripts\") pod \"20a4d7a1-39fa-4ab6-add9-7258bb865809\" (UID: \"20a4d7a1-39fa-4ab6-add9-7258bb865809\") " Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.388533 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94jhd\" (UniqueName: \"kubernetes.io/projected/20a4d7a1-39fa-4ab6-add9-7258bb865809-kube-api-access-94jhd\") pod \"20a4d7a1-39fa-4ab6-add9-7258bb865809\" (UID: \"20a4d7a1-39fa-4ab6-add9-7258bb865809\") " Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.388839 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsgjp\" (UniqueName: \"kubernetes.io/projected/a523a3ed-09c8-4752-8b89-562cbb1c80c1-kube-api-access-fsgjp\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.389800 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20a4d7a1-39fa-4ab6-add9-7258bb865809-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20a4d7a1-39fa-4ab6-add9-7258bb865809" (UID: "20a4d7a1-39fa-4ab6-add9-7258bb865809"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.392030 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a4d7a1-39fa-4ab6-add9-7258bb865809-kube-api-access-94jhd" (OuterVolumeSpecName: "kube-api-access-94jhd") pod "20a4d7a1-39fa-4ab6-add9-7258bb865809" (UID: "20a4d7a1-39fa-4ab6-add9-7258bb865809"). InnerVolumeSpecName "kube-api-access-94jhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.435481 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.480561 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8rnc2"] Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.480785 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-8rnc2" podUID="0cf50200-0128-4de2-a057-658b021fd401" containerName="dnsmasq-dns" containerID="cri-o://fa79289c7da33f582d47d841cae8f700ae9437f94870f31f5e9be1a732de90a8" gracePeriod=10 Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.490364 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a4d7a1-39fa-4ab6-add9-7258bb865809-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.490404 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94jhd\" (UniqueName: \"kubernetes.io/projected/20a4d7a1-39fa-4ab6-add9-7258bb865809-kube-api-access-94jhd\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.721392 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dc2mk" event={"ID":"20a4d7a1-39fa-4ab6-add9-7258bb865809","Type":"ContainerDied","Data":"c88c95b99c32951f2c1e912cf55e03f888279b990ad0d4ad520d28c955525d02"} Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.721433 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88c95b99c32951f2c1e912cf55e03f888279b990ad0d4ad520d28c955525d02" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.721514 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dc2mk" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.726042 4789 generic.go:334] "Generic (PLEG): container finished" podID="0cf50200-0128-4de2-a057-658b021fd401" containerID="fa79289c7da33f582d47d841cae8f700ae9437f94870f31f5e9be1a732de90a8" exitCode=0 Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.726095 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8rnc2" event={"ID":"0cf50200-0128-4de2-a057-658b021fd401","Type":"ContainerDied","Data":"fa79289c7da33f582d47d841cae8f700ae9437f94870f31f5e9be1a732de90a8"} Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.729011 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mzj6x" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.730711 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mzj6x" event={"ID":"a523a3ed-09c8-4752-8b89-562cbb1c80c1","Type":"ContainerDied","Data":"cf4e58789b0a54cd36c83b6c04de6db06fc566803e118b9d7661b385e1ef3fb1"} Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.730777 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4e58789b0a54cd36c83b6c04de6db06fc566803e118b9d7661b385e1ef3fb1" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.886430 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.998118 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-sb\") pod \"0cf50200-0128-4de2-a057-658b021fd401\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.998208 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-nb\") pod \"0cf50200-0128-4de2-a057-658b021fd401\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.998313 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfmhb\" (UniqueName: \"kubernetes.io/projected/0cf50200-0128-4de2-a057-658b021fd401-kube-api-access-sfmhb\") pod \"0cf50200-0128-4de2-a057-658b021fd401\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.998334 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-config\") pod \"0cf50200-0128-4de2-a057-658b021fd401\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " Nov 24 11:46:22 crc kubenswrapper[4789]: I1124 11:46:22.998368 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-dns-svc\") pod \"0cf50200-0128-4de2-a057-658b021fd401\" (UID: \"0cf50200-0128-4de2-a057-658b021fd401\") " Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.043364 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf50200-0128-4de2-a057-658b021fd401-kube-api-access-sfmhb" (OuterVolumeSpecName: "kube-api-access-sfmhb") pod "0cf50200-0128-4de2-a057-658b021fd401" (UID: "0cf50200-0128-4de2-a057-658b021fd401"). InnerVolumeSpecName "kube-api-access-sfmhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.070368 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0cf50200-0128-4de2-a057-658b021fd401" (UID: "0cf50200-0128-4de2-a057-658b021fd401"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.077154 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-config" (OuterVolumeSpecName: "config") pod "0cf50200-0128-4de2-a057-658b021fd401" (UID: "0cf50200-0128-4de2-a057-658b021fd401"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.079450 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0cf50200-0128-4de2-a057-658b021fd401" (UID: "0cf50200-0128-4de2-a057-658b021fd401"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.100182 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfmhb\" (UniqueName: \"kubernetes.io/projected/0cf50200-0128-4de2-a057-658b021fd401-kube-api-access-sfmhb\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.102139 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.102204 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.102295 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.149196 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0cf50200-0128-4de2-a057-658b021fd401" (UID: "0cf50200-0128-4de2-a057-658b021fd401"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.205957 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cf50200-0128-4de2-a057-658b021fd401-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.222643 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.271990 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.275816 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.283162 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.316947 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e382fe-d85a-44e5-8047-e3ddad1a85f4-operator-scripts\") pod \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\" (UID: \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\") " Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.317130 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn6v5\" (UniqueName: \"kubernetes.io/projected/91e382fe-d85a-44e5-8047-e3ddad1a85f4-kube-api-access-nn6v5\") pod \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\" (UID: \"91e382fe-d85a-44e5-8047-e3ddad1a85f4\") " Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.318641 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e382fe-d85a-44e5-8047-e3ddad1a85f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91e382fe-d85a-44e5-8047-e3ddad1a85f4" (UID: "91e382fe-d85a-44e5-8047-e3ddad1a85f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.327691 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e382fe-d85a-44e5-8047-e3ddad1a85f4-kube-api-access-nn6v5" (OuterVolumeSpecName: "kube-api-access-nn6v5") pod "91e382fe-d85a-44e5-8047-e3ddad1a85f4" (UID: "91e382fe-d85a-44e5-8047-e3ddad1a85f4"). InnerVolumeSpecName "kube-api-access-nn6v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.418116 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44mdf\" (UniqueName: \"kubernetes.io/projected/6bba6a0a-259f-4a74-850e-2025f99757e6-kube-api-access-44mdf\") pod \"6bba6a0a-259f-4a74-850e-2025f99757e6\" (UID: \"6bba6a0a-259f-4a74-850e-2025f99757e6\") " Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.418179 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n74rn\" (UniqueName: \"kubernetes.io/projected/85dfb49e-554c-415f-9add-67bb02165386-kube-api-access-n74rn\") pod \"85dfb49e-554c-415f-9add-67bb02165386\" (UID: \"85dfb49e-554c-415f-9add-67bb02165386\") " Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.418247 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dfb49e-554c-415f-9add-67bb02165386-operator-scripts\") pod \"85dfb49e-554c-415f-9add-67bb02165386\" (UID: \"85dfb49e-554c-415f-9add-67bb02165386\") " Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.418303 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bba6a0a-259f-4a74-850e-2025f99757e6-operator-scripts\") pod \"6bba6a0a-259f-4a74-850e-2025f99757e6\" (UID: \"6bba6a0a-259f-4a74-850e-2025f99757e6\") " Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.418329 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8dfa37-0258-4fa8-814f-52c167e55e9c-operator-scripts\") pod \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\" (UID: \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\") " Nov 24 11:46:23 crc 
kubenswrapper[4789]: I1124 11:46:23.418393 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv468\" (UniqueName: \"kubernetes.io/projected/5d8dfa37-0258-4fa8-814f-52c167e55e9c-kube-api-access-tv468\") pod \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\" (UID: \"5d8dfa37-0258-4fa8-814f-52c167e55e9c\") " Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.418674 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn6v5\" (UniqueName: \"kubernetes.io/projected/91e382fe-d85a-44e5-8047-e3ddad1a85f4-kube-api-access-nn6v5\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.418691 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e382fe-d85a-44e5-8047-e3ddad1a85f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.418946 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85dfb49e-554c-415f-9add-67bb02165386-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85dfb49e-554c-415f-9add-67bb02165386" (UID: "85dfb49e-554c-415f-9add-67bb02165386"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.419016 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bba6a0a-259f-4a74-850e-2025f99757e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bba6a0a-259f-4a74-850e-2025f99757e6" (UID: "6bba6a0a-259f-4a74-850e-2025f99757e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.419414 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d8dfa37-0258-4fa8-814f-52c167e55e9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d8dfa37-0258-4fa8-814f-52c167e55e9c" (UID: "5d8dfa37-0258-4fa8-814f-52c167e55e9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.420897 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bba6a0a-259f-4a74-850e-2025f99757e6-kube-api-access-44mdf" (OuterVolumeSpecName: "kube-api-access-44mdf") pod "6bba6a0a-259f-4a74-850e-2025f99757e6" (UID: "6bba6a0a-259f-4a74-850e-2025f99757e6"). InnerVolumeSpecName "kube-api-access-44mdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.421123 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85dfb49e-554c-415f-9add-67bb02165386-kube-api-access-n74rn" (OuterVolumeSpecName: "kube-api-access-n74rn") pod "85dfb49e-554c-415f-9add-67bb02165386" (UID: "85dfb49e-554c-415f-9add-67bb02165386"). InnerVolumeSpecName "kube-api-access-n74rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.422332 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8dfa37-0258-4fa8-814f-52c167e55e9c-kube-api-access-tv468" (OuterVolumeSpecName: "kube-api-access-tv468") pod "5d8dfa37-0258-4fa8-814f-52c167e55e9c" (UID: "5d8dfa37-0258-4fa8-814f-52c167e55e9c"). 
InnerVolumeSpecName "kube-api-access-tv468". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.519902 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n74rn\" (UniqueName: \"kubernetes.io/projected/85dfb49e-554c-415f-9add-67bb02165386-kube-api-access-n74rn\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.519947 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85dfb49e-554c-415f-9add-67bb02165386-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.519961 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bba6a0a-259f-4a74-850e-2025f99757e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.519969 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8dfa37-0258-4fa8-814f-52c167e55e9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.519979 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv468\" (UniqueName: \"kubernetes.io/projected/5d8dfa37-0258-4fa8-814f-52c167e55e9c-kube-api-access-tv468\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.519989 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44mdf\" (UniqueName: \"kubernetes.io/projected/6bba6a0a-259f-4a74-850e-2025f99757e6-kube-api-access-44mdf\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.737378 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9ffd-account-create-k89vc" event={"ID":"6bba6a0a-259f-4a74-850e-2025f99757e6","Type":"ContainerDied","Data":"f881dc18f34606e943f094c22ca2be6595c8ffce4d8bbce37f93674c84e8e12c"} Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.737415 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f881dc18f34606e943f094c22ca2be6595c8ffce4d8bbce37f93674c84e8e12c" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.737498 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9ffd-account-create-k89vc" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.753288 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vplq8" event={"ID":"5d8dfa37-0258-4fa8-814f-52c167e55e9c","Type":"ContainerDied","Data":"3a5dfbc83311d983cf459b780cc5e87af4084f068dc13f852007dfa4c1ad628b"} Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.753327 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a5dfbc83311d983cf459b780cc5e87af4084f068dc13f852007dfa4c1ad628b" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.753378 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vplq8" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.765554 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3375-account-create-nntd8" event={"ID":"85dfb49e-554c-415f-9add-67bb02165386","Type":"ContainerDied","Data":"564b41ce1a4f882821b62e86a9d655314c215b49a5ad6f7cbe28b58a3d6a3baa"} Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.765604 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564b41ce1a4f882821b62e86a9d655314c215b49a5ad6f7cbe28b58a3d6a3baa" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.765570 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3375-account-create-nntd8" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.768091 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4176-account-create-6mldw" event={"ID":"91e382fe-d85a-44e5-8047-e3ddad1a85f4","Type":"ContainerDied","Data":"2e232f13c17bc2d791c58d366dae796d0a1bf5e4b208c5884f6a387efc095296"} Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.768132 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e232f13c17bc2d791c58d366dae796d0a1bf5e4b208c5884f6a387efc095296" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.768185 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4176-account-create-6mldw" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.772821 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8rnc2" event={"ID":"0cf50200-0128-4de2-a057-658b021fd401","Type":"ContainerDied","Data":"7c0d3c1d786654430f6ea918cbd0183b31a2975de8ffa7979243f1b8c8266a63"} Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.772872 4789 scope.go:117] "RemoveContainer" containerID="fa79289c7da33f582d47d841cae8f700ae9437f94870f31f5e9be1a732de90a8" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.773028 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8rnc2" Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.825240 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8rnc2"] Nov 24 11:46:23 crc kubenswrapper[4789]: I1124 11:46:23.836151 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8rnc2"] Nov 24 11:46:24 crc kubenswrapper[4789]: I1124 11:46:24.184641 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf50200-0128-4de2-a057-658b021fd401" path="/var/lib/kubelet/pods/0cf50200-0128-4de2-a057-658b021fd401/volumes" Nov 24 11:46:26 crc kubenswrapper[4789]: I1124 11:46:26.631693 4789 scope.go:117] "RemoveContainer" containerID="e8b8c5f12ce742c6a39cd760b2d674767a25f3d1b5575f382d97e63511f94cda" Nov 24 11:46:27 crc kubenswrapper[4789]: I1124 11:46:27.817611 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rmhhs" event={"ID":"46218063-8c0c-4d2a-9693-1ee25e647520","Type":"ContainerStarted","Data":"894d5d19f0675d2aa6b24eb3551b3228fc1b6e0ec8f2c0a157e6a030fdd128d8"} Nov 24 11:46:27 crc kubenswrapper[4789]: I1124 11:46:27.844448 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rmhhs" podStartSLOduration=2.674215668 podStartE2EDuration="8.844428483s" podCreationTimestamp="2025-11-24 11:46:19 +0000 UTC" firstStartedPulling="2025-11-24 11:46:20.544846956 +0000 UTC m=+963.127318325" lastFinishedPulling="2025-11-24 11:46:26.715059751 +0000 UTC m=+969.297531140" observedRunningTime="2025-11-24 11:46:27.83602954 +0000 UTC m=+970.418500959" watchObservedRunningTime="2025-11-24 11:46:27.844428483 +0000 UTC m=+970.426899872" Nov 24 11:46:29 crc kubenswrapper[4789]: I1124 11:46:29.839358 4789 generic.go:334] "Generic (PLEG): container finished" podID="46218063-8c0c-4d2a-9693-1ee25e647520" containerID="894d5d19f0675d2aa6b24eb3551b3228fc1b6e0ec8f2c0a157e6a030fdd128d8" exitCode=0 Nov 24 11:46:29 crc kubenswrapper[4789]: I1124 11:46:29.839589 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rmhhs" event={"ID":"46218063-8c0c-4d2a-9693-1ee25e647520","Type":"ContainerDied","Data":"894d5d19f0675d2aa6b24eb3551b3228fc1b6e0ec8f2c0a157e6a030fdd128d8"} Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.198034 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.365122 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-combined-ca-bundle\") pod \"46218063-8c0c-4d2a-9693-1ee25e647520\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.365281 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-config-data\") pod \"46218063-8c0c-4d2a-9693-1ee25e647520\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.365309 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz8zw\" (UniqueName: \"kubernetes.io/projected/46218063-8c0c-4d2a-9693-1ee25e647520-kube-api-access-fz8zw\") pod \"46218063-8c0c-4d2a-9693-1ee25e647520\" (UID: \"46218063-8c0c-4d2a-9693-1ee25e647520\") " Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.376940 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46218063-8c0c-4d2a-9693-1ee25e647520-kube-api-access-fz8zw" (OuterVolumeSpecName: "kube-api-access-fz8zw") pod "46218063-8c0c-4d2a-9693-1ee25e647520" (UID: "46218063-8c0c-4d2a-9693-1ee25e647520"). InnerVolumeSpecName "kube-api-access-fz8zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.390565 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46218063-8c0c-4d2a-9693-1ee25e647520" (UID: "46218063-8c0c-4d2a-9693-1ee25e647520"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.410553 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-config-data" (OuterVolumeSpecName: "config-data") pod "46218063-8c0c-4d2a-9693-1ee25e647520" (UID: "46218063-8c0c-4d2a-9693-1ee25e647520"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.467660 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.467798 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46218063-8c0c-4d2a-9693-1ee25e647520-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.467861 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz8zw\" (UniqueName: \"kubernetes.io/projected/46218063-8c0c-4d2a-9693-1ee25e647520-kube-api-access-fz8zw\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.888015 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rmhhs" event={"ID":"46218063-8c0c-4d2a-9693-1ee25e647520","Type":"ContainerDied","Data":"7bb423bfb4288cf9e4fb18b194b89f50e2e719bbf695ff4eeab349049b87ba65"} Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.888772 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bb423bfb4288cf9e4fb18b194b89f50e2e719bbf695ff4eeab349049b87ba65" Nov 24 11:46:31 crc kubenswrapper[4789]: I1124 11:46:31.888095 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rmhhs" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.193609 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-4tz9j"] Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194226 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85dfb49e-554c-415f-9add-67bb02165386" containerName="mariadb-account-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194256 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="85dfb49e-554c-415f-9add-67bb02165386" containerName="mariadb-account-create" Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194271 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf50200-0128-4de2-a057-658b021fd401" containerName="init" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194279 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf50200-0128-4de2-a057-658b021fd401" containerName="init" Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194299 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a523a3ed-09c8-4752-8b89-562cbb1c80c1" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194307 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a523a3ed-09c8-4752-8b89-562cbb1c80c1" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194327 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf50200-0128-4de2-a057-658b021fd401" containerName="dnsmasq-dns" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194335 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf50200-0128-4de2-a057-658b021fd401" containerName="dnsmasq-dns" Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194353 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e382fe-d85a-44e5-8047-e3ddad1a85f4" containerName="mariadb-account-create" Nov 24 11:46:32 crc 
kubenswrapper[4789]: I1124 11:46:32.194362 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e382fe-d85a-44e5-8047-e3ddad1a85f4" containerName="mariadb-account-create" Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194379 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a4d7a1-39fa-4ab6-add9-7258bb865809" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194387 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a4d7a1-39fa-4ab6-add9-7258bb865809" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194396 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bba6a0a-259f-4a74-850e-2025f99757e6" containerName="mariadb-account-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194403 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bba6a0a-259f-4a74-850e-2025f99757e6" containerName="mariadb-account-create" Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194412 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8dfa37-0258-4fa8-814f-52c167e55e9c" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194420 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8dfa37-0258-4fa8-814f-52c167e55e9c" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: E1124 11:46:32.194430 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46218063-8c0c-4d2a-9693-1ee25e647520" containerName="keystone-db-sync" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194438 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="46218063-8c0c-4d2a-9693-1ee25e647520" containerName="keystone-db-sync" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194667 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e382fe-d85a-44e5-8047-e3ddad1a85f4" containerName="mariadb-account-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194685 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8dfa37-0258-4fa8-814f-52c167e55e9c" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194711 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bba6a0a-259f-4a74-850e-2025f99757e6" containerName="mariadb-account-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194732 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf50200-0128-4de2-a057-658b021fd401" containerName="dnsmasq-dns" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194743 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a4d7a1-39fa-4ab6-add9-7258bb865809" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194764 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="46218063-8c0c-4d2a-9693-1ee25e647520" containerName="keystone-db-sync" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194784 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="85dfb49e-554c-415f-9add-67bb02165386" containerName="mariadb-account-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.194796 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a523a3ed-09c8-4752-8b89-562cbb1c80c1" containerName="mariadb-database-create" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.202909 4789 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.228315 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-4tz9j"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.237145 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gvrrj"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.238316 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.246416 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gvrrj"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.250241 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.250419 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gpqd2" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.250576 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.250735 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.250919 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.285643 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-config\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.285738 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-dns-svc\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.285759 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.285776 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.285810 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvzm\" (UniqueName: \"kubernetes.io/projected/b1cea940-ff88-4c80-98fb-548eef2631e1-kube-api-access-pgvzm\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " 
pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.387832 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-config-data\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.387906 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-fernet-keys\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.387945 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-dns-svc\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.387970 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.387995 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.388025 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zglrl\" (UniqueName: \"kubernetes.io/projected/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-kube-api-access-zglrl\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.388068 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvzm\" (UniqueName: \"kubernetes.io/projected/b1cea940-ff88-4c80-98fb-548eef2631e1-kube-api-access-pgvzm\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.388130 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-config\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.388159 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-credential-keys\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 
11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.388186 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-combined-ca-bundle\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.388212 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-scripts\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.389264 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-dns-svc\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.389690 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.390041 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.404842 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-config\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.415113 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvzm\" (UniqueName: \"kubernetes.io/projected/b1cea940-ff88-4c80-98fb-548eef2631e1-kube-api-access-pgvzm\") pod \"dnsmasq-dns-67795cd9-4tz9j\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.422508 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-msb22"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.423531 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.436601 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7smvg" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.436808 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.436916 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.444767 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-msb22"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.489556 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-fernet-keys\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.489619 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zglrl\" (UniqueName: \"kubernetes.io/projected/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-kube-api-access-zglrl\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.489696 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-credential-keys\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.489720 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-combined-ca-bundle\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.489736 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-scripts\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.489766 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-config-data\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.504433 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-fernet-keys\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.511122 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-scripts\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.512912 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-combined-ca-bundle\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.516386 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-config-data\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.527825 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.531691 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-credential-keys\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.565195 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zglrl\" (UniqueName: \"kubernetes.io/projected/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-kube-api-access-zglrl\") pod \"keystone-bootstrap-gvrrj\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.577807 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.594194 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-db-sync-config-data\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.594253 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-combined-ca-bundle\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.594277 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-scripts\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.594309 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-etc-machine-id\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.594340 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-config-data\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.594377 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9mc8\" (UniqueName: \"kubernetes.io/projected/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-kube-api-access-s9mc8\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.697832 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-config-data\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.698400 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9mc8\" (UniqueName: \"kubernetes.io/projected/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-kube-api-access-s9mc8\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.698481 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-db-sync-config-data\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 
24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.698516 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-combined-ca-bundle\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.698558 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-scripts\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.698593 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-etc-machine-id\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.698677 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-etc-machine-id\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.707573 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-db-sync-config-data\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.708418 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-scripts\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.712341 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-combined-ca-bundle\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.720437 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-config-data\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.805930 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9mc8\" (UniqueName: \"kubernetes.io/projected/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-kube-api-access-s9mc8\") pod \"cinder-db-sync-msb22\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.810434 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-mvgg8"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.811414 4789 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.818491 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zqvs2" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.818788 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.839528 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mvgg8"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.860134 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gn9zx"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.861517 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.890950 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.891111 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.891212 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w75rj" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.903965 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gn9zx"] Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.904901 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/bf547f01-0021-4f93-ae9b-a7afa5016c6a-kube-api-access-jxmvj\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.905028 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-combined-ca-bundle\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:32 crc kubenswrapper[4789]: I1124 11:46:32.905080 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-db-sync-config-data\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.011560 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-db-sync-config-data\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.011620 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad19529b-59a5-42f3-8adf-ba14978e1f8a-logs\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 
24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.011685 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/bf547f01-0021-4f93-ae9b-a7afa5016c6a-kube-api-access-jxmvj\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.011716 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcsxw\" (UniqueName: \"kubernetes.io/projected/ad19529b-59a5-42f3-8adf-ba14978e1f8a-kube-api-access-jcsxw\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.011769 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-combined-ca-bundle\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.011789 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-scripts\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.011821 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-combined-ca-bundle\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.011844 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-config-data\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.019800 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-combined-ca-bundle\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.021125 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-db-sync-config-data\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.032007 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-7s7v7"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.033909 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.045045 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.054037 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.054293 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vwb4m" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.054424 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.060074 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.064364 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/bf547f01-0021-4f93-ae9b-a7afa5016c6a-kube-api-access-jxmvj\") pod \"barbican-db-sync-mvgg8\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.099124 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-msb22" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.102825 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.102889 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.103932 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-4tz9j"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.114771 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcsxw\" (UniqueName: \"kubernetes.io/projected/ad19529b-59a5-42f3-8adf-ba14978e1f8a-kube-api-access-jcsxw\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.114837 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-combined-ca-bundle\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.114859 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-scripts\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.114890 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-config-data\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.114917 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad19529b-59a5-42f3-8adf-ba14978e1f8a-logs\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.115386 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad19529b-59a5-42f3-8adf-ba14978e1f8a-logs\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.147741 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.154982 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-scripts\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.156083 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-combined-ca-bundle\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.176648 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcsxw\" (UniqueName: \"kubernetes.io/projected/ad19529b-59a5-42f3-8adf-ba14978e1f8a-kube-api-access-jcsxw\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.184555 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-config-data\") pod \"placement-db-sync-gn9zx\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.199740 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7s7v7"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.201859 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219421 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-config-data\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219527 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpqf\" (UniqueName: \"kubernetes.io/projected/0c87d408-bf3b-4156-9116-110b948e3ead-kube-api-access-jlpqf\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219543 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-run-httpd\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219566 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-config\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219585 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219607 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-combined-ca-bundle\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219646 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-scripts\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219674 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219709 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrf7\" (UniqueName: \"kubernetes.io/projected/7ce66a07-c046-4c6c-b5a5-443818f1b5db-kube-api-access-wmrf7\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.219730 
4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-log-httpd\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.221123 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gn9zx" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.228571 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.229963 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.294718 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.321323 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.321405 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrf7\" (UniqueName: \"kubernetes.io/projected/7ce66a07-c046-4c6c-b5a5-443818f1b5db-kube-api-access-wmrf7\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.321428 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-log-httpd\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.325987 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-config-data\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326023 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-config\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326053 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326102 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpqf\" (UniqueName: \"kubernetes.io/projected/0c87d408-bf3b-4156-9116-110b948e3ead-kube-api-access-jlpqf\") pod \"ceilometer-0\" (UID: 
\"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326127 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-run-httpd\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326146 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326173 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gsp\" (UniqueName: \"kubernetes.io/projected/4e2acd55-a485-43c8-b3e5-88083c626aa0-kube-api-access-g8gsp\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326210 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-config\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326259 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326291 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-combined-ca-bundle\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326310 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.326397 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-scripts\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.327217 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-run-httpd\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.328316 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-log-httpd\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.339084 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-scripts\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.340653 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.340792 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-config\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.346444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-config-data\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.349819 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.367023 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-combined-ca-bundle\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.371725 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrf7\" (UniqueName: \"kubernetes.io/projected/7ce66a07-c046-4c6c-b5a5-443818f1b5db-kube-api-access-wmrf7\") pod \"neutron-db-sync-7s7v7\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.377817 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlpqf\" (UniqueName: \"kubernetes.io/projected/0c87d408-bf3b-4156-9116-110b948e3ead-kube-api-access-jlpqf\") pod \"ceilometer-0\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.380558 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-4tz9j"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.432307 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-config\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") 
" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.432356 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.432397 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.432413 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8gsp\" (UniqueName: \"kubernetes.io/projected/4e2acd55-a485-43c8-b3e5-88083c626aa0-kube-api-access-g8gsp\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.432482 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.440347 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.440960 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-config\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.442143 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.450933 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.474160 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8gsp\" (UniqueName: \"kubernetes.io/projected/4e2acd55-a485-43c8-b3e5-88083c626aa0-kube-api-access-g8gsp\") pod \"dnsmasq-dns-5b6dbdb6f5-6bfk2\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.528990 
4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.559068 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.561376 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.742087 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mvgg8"] Nov 24 11:46:33 crc kubenswrapper[4789]: W1124 11:46:33.801735 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd057fecf_b22d_4304_9ce4_4fbbd358ecc5.slice/crio-efa4fd79fbea09df4cd1b1f98267ff7c150cd1e3942257954320b1e8643c2122 WatchSource:0}: Error finding container efa4fd79fbea09df4cd1b1f98267ff7c150cd1e3942257954320b1e8643c2122: Status 404 returned error can't find the container with id efa4fd79fbea09df4cd1b1f98267ff7c150cd1e3942257954320b1e8643c2122 Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.805589 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gvrrj"] Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.837107 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-msb22"] Nov 24 11:46:33 crc kubenswrapper[4789]: W1124 11:46:33.850600 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e41ad3b_8d25_49db_8c15_4a3a57f47e2f.slice/crio-4280a6e1c5950e3b00092cd12076c9b1481e5259c798782b177a411d6dd30963 WatchSource:0}: Error finding container 4280a6e1c5950e3b00092cd12076c9b1481e5259c798782b177a411d6dd30963: Status 404 returned error can't find the container with id 4280a6e1c5950e3b00092cd12076c9b1481e5259c798782b177a411d6dd30963 Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.948642 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-msb22" event={"ID":"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f","Type":"ContainerStarted","Data":"4280a6e1c5950e3b00092cd12076c9b1481e5259c798782b177a411d6dd30963"} Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.956304 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gvrrj" event={"ID":"d057fecf-b22d-4304-9ce4-4fbbd358ecc5","Type":"ContainerStarted","Data":"efa4fd79fbea09df4cd1b1f98267ff7c150cd1e3942257954320b1e8643c2122"} Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.972499 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" event={"ID":"b1cea940-ff88-4c80-98fb-548eef2631e1","Type":"ContainerStarted","Data":"8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3"} Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.972534 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" event={"ID":"b1cea940-ff88-4c80-98fb-548eef2631e1","Type":"ContainerStarted","Data":"6eaac0009f23f2c6b57f209e39094c4779a195e1f4098a13798df44a7a070cf6"} Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.972652 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" podUID="b1cea940-ff88-4c80-98fb-548eef2631e1" containerName="init" 
containerID="cri-o://8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3" gracePeriod=10 Nov 24 11:46:33 crc kubenswrapper[4789]: I1124 11:46:33.982564 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mvgg8" event={"ID":"bf547f01-0021-4f93-ae9b-a7afa5016c6a","Type":"ContainerStarted","Data":"0be149f3213f1fffa7a28c8587c91247d364ff994cff6b37eb561cff9a625da5"} Nov 24 11:46:34 crc kubenswrapper[4789]: I1124 11:46:34.101574 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gn9zx"] Nov 24 11:46:34 crc kubenswrapper[4789]: W1124 11:46:34.109310 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad19529b_59a5_42f3_8adf_ba14978e1f8a.slice/crio-0b88730f5ef4ea56b3035d54604954c51a9e153c5bca9a110448bfbd0ab84ade WatchSource:0}: Error finding container 0b88730f5ef4ea56b3035d54604954c51a9e153c5bca9a110448bfbd0ab84ade: Status 404 returned error can't find the container with id 0b88730f5ef4ea56b3035d54604954c51a9e153c5bca9a110448bfbd0ab84ade Nov 24 11:46:34 crc kubenswrapper[4789]: I1124 11:46:34.168505 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:46:34 crc kubenswrapper[4789]: I1124 11:46:34.245226 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7s7v7"] Nov 24 11:46:34 crc kubenswrapper[4789]: W1124 11:46:34.248400 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ce66a07_c046_4c6c_b5a5_443818f1b5db.slice/crio-812e9a6d07f15c65f265d8f9a1a84ec0b28980ee881cbf2502ce0a4838bd159f WatchSource:0}: Error finding container 812e9a6d07f15c65f265d8f9a1a84ec0b28980ee881cbf2502ce0a4838bd159f: Status 404 returned error can't find the container with id 812e9a6d07f15c65f265d8f9a1a84ec0b28980ee881cbf2502ce0a4838bd159f Nov 24 11:46:34 crc kubenswrapper[4789]: I1124 11:46:34.308131 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2"] Nov 24 11:46:34 crc kubenswrapper[4789]: I1124 11:46:34.932589 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:34.999784 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerStarted","Data":"4cad5290bcab57fa34e85cfd3463e4975002f4a93e7cd150076daa9de74f9295"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.022139 4789 generic.go:334] "Generic (PLEG): container finished" podID="b1cea940-ff88-4c80-98fb-548eef2631e1" containerID="8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3" exitCode=0 Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.022224 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" event={"ID":"b1cea940-ff88-4c80-98fb-548eef2631e1","Type":"ContainerDied","Data":"8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.022299 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" event={"ID":"b1cea940-ff88-4c80-98fb-548eef2631e1","Type":"ContainerDied","Data":"6eaac0009f23f2c6b57f209e39094c4779a195e1f4098a13798df44a7a070cf6"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.022315 4789 scope.go:117] "RemoveContainer" containerID="8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.022519 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-4tz9j" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.036497 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gn9zx" event={"ID":"ad19529b-59a5-42f3-8adf-ba14978e1f8a","Type":"ContainerStarted","Data":"0b88730f5ef4ea56b3035d54604954c51a9e153c5bca9a110448bfbd0ab84ade"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.044637 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gvrrj" event={"ID":"d057fecf-b22d-4304-9ce4-4fbbd358ecc5","Type":"ContainerStarted","Data":"4468ec69241d242d18355eabb44c8175ffd56094d1c1619620fa8455b26ad737"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.057128 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7s7v7" event={"ID":"7ce66a07-c046-4c6c-b5a5-443818f1b5db","Type":"ContainerStarted","Data":"326d01aed54a27faad41244ea6c18159d3da2e453337a0d01eff0fbbb474da84"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.057182 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7s7v7" event={"ID":"7ce66a07-c046-4c6c-b5a5-443818f1b5db","Type":"ContainerStarted","Data":"812e9a6d07f15c65f265d8f9a1a84ec0b28980ee881cbf2502ce0a4838bd159f"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.059826 4789 generic.go:334] "Generic (PLEG): container finished" podID="4e2acd55-a485-43c8-b3e5-88083c626aa0" containerID="d87e7363d763cc8f6d5f4402c241d476613fb35afd2ee3b2a17771dc18d5289d" exitCode=0 Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.059868 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" event={"ID":"4e2acd55-a485-43c8-b3e5-88083c626aa0","Type":"ContainerDied","Data":"d87e7363d763cc8f6d5f4402c241d476613fb35afd2ee3b2a17771dc18d5289d"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.059890 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" event={"ID":"4e2acd55-a485-43c8-b3e5-88083c626aa0","Type":"ContainerStarted","Data":"cf1dc713208fead4773d998722d7e9775ee0dba882e62b8974f2f85a928669ba"} Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.076607 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gvrrj" podStartSLOduration=3.076588539 podStartE2EDuration="3.076588539s" podCreationTimestamp="2025-11-24 11:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:46:35.072380917 +0000 UTC m=+977.654852296" watchObservedRunningTime="2025-11-24 11:46:35.076588539 +0000 UTC m=+977.659059918" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.083061 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-dns-svc\") pod \"b1cea940-ff88-4c80-98fb-548eef2631e1\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.083226 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgvzm\" (UniqueName: \"kubernetes.io/projected/b1cea940-ff88-4c80-98fb-548eef2631e1-kube-api-access-pgvzm\") pod \"b1cea940-ff88-4c80-98fb-548eef2631e1\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.083434 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-config\") pod \"b1cea940-ff88-4c80-98fb-548eef2631e1\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.083577 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-sb\") pod \"b1cea940-ff88-4c80-98fb-548eef2631e1\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.083698 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-nb\") pod \"b1cea940-ff88-4c80-98fb-548eef2631e1\" (UID: \"b1cea940-ff88-4c80-98fb-548eef2631e1\") " Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.089985 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1cea940-ff88-4c80-98fb-548eef2631e1-kube-api-access-pgvzm" (OuterVolumeSpecName: "kube-api-access-pgvzm") pod "b1cea940-ff88-4c80-98fb-548eef2631e1" (UID: "b1cea940-ff88-4c80-98fb-548eef2631e1"). InnerVolumeSpecName "kube-api-access-pgvzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.138049 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1cea940-ff88-4c80-98fb-548eef2631e1" (UID: "b1cea940-ff88-4c80-98fb-548eef2631e1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.144134 4789 scope.go:117] "RemoveContainer" containerID="8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3" Nov 24 11:46:35 crc kubenswrapper[4789]: E1124 11:46:35.146158 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3\": container with ID starting with 8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3 not found: ID does not exist" containerID="8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.146184 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3"} err="failed to get container status \"8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3\": rpc error: code = NotFound desc = could not find container \"8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3\": container with ID starting with 8449577c7ef87ccff6ae7cbffd69f0e1231e8b1c8995bf4b9765352c2a4dbbc3 not found: ID does not exist" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.168107 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1cea940-ff88-4c80-98fb-548eef2631e1" (UID: "b1cea940-ff88-4c80-98fb-548eef2631e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.168747 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1cea940-ff88-4c80-98fb-548eef2631e1" (UID: "b1cea940-ff88-4c80-98fb-548eef2631e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.169964 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-7s7v7" podStartSLOduration=3.169939659 podStartE2EDuration="3.169939659s" podCreationTimestamp="2025-11-24 11:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:46:35.140931907 +0000 UTC m=+977.723403286" watchObservedRunningTime="2025-11-24 11:46:35.169939659 +0000 UTC m=+977.752411038" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.176682 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-config" (OuterVolumeSpecName: "config") pod "b1cea940-ff88-4c80-98fb-548eef2631e1" (UID: "b1cea940-ff88-4c80-98fb-548eef2631e1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.197681 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.197711 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgvzm\" (UniqueName: \"kubernetes.io/projected/b1cea940-ff88-4c80-98fb-548eef2631e1-kube-api-access-pgvzm\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.197721 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.197733 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.197742 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1cea940-ff88-4c80-98fb-548eef2631e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.431620 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-4tz9j"] Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.453583 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-4tz9j"] Nov 24 11:46:35 crc kubenswrapper[4789]: I1124 11:46:35.591556 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:46:36 crc kubenswrapper[4789]: I1124 11:46:36.073408 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" event={"ID":"4e2acd55-a485-43c8-b3e5-88083c626aa0","Type":"ContainerStarted","Data":"ca77ed599ef95c42e6450de71bc3f711651b1528f791d5e1185b080b1195d4a1"} Nov 24 11:46:36 crc kubenswrapper[4789]: I1124 11:46:36.073586 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:36 crc kubenswrapper[4789]: I1124 11:46:36.097788 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" podStartSLOduration=3.097713351 podStartE2EDuration="3.097713351s" podCreationTimestamp="2025-11-24 11:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:46:36.092105245 +0000 UTC m=+978.674576624" watchObservedRunningTime="2025-11-24 11:46:36.097713351 +0000 UTC m=+978.680184730" Nov 24 11:46:36 crc kubenswrapper[4789]: I1124 11:46:36.185615 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1cea940-ff88-4c80-98fb-548eef2631e1" path="/var/lib/kubelet/pods/b1cea940-ff88-4c80-98fb-548eef2631e1/volumes" Nov 24 11:46:38 crc kubenswrapper[4789]: I1124 11:46:38.092396 4789 generic.go:334] "Generic (PLEG): container finished" podID="d057fecf-b22d-4304-9ce4-4fbbd358ecc5" containerID="4468ec69241d242d18355eabb44c8175ffd56094d1c1619620fa8455b26ad737" exitCode=0 Nov 24 11:46:38 crc kubenswrapper[4789]: I1124 11:46:38.092449 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-gvrrj" event={"ID":"d057fecf-b22d-4304-9ce4-4fbbd358ecc5","Type":"ContainerDied","Data":"4468ec69241d242d18355eabb44c8175ffd56094d1c1619620fa8455b26ad737"} Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.741473 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.892051 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-config-data\") pod \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.892118 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zglrl\" (UniqueName: \"kubernetes.io/projected/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-kube-api-access-zglrl\") pod \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.892142 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-credential-keys\") pod \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.892203 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-scripts\") pod \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.892227 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-fernet-keys\") pod \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.892282 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-combined-ca-bundle\") pod \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\" (UID: \"d057fecf-b22d-4304-9ce4-4fbbd358ecc5\") " Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.899112 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d057fecf-b22d-4304-9ce4-4fbbd358ecc5" (UID: "d057fecf-b22d-4304-9ce4-4fbbd358ecc5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.899296 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-kube-api-access-zglrl" (OuterVolumeSpecName: "kube-api-access-zglrl") pod "d057fecf-b22d-4304-9ce4-4fbbd358ecc5" (UID: "d057fecf-b22d-4304-9ce4-4fbbd358ecc5"). InnerVolumeSpecName "kube-api-access-zglrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.900581 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-scripts" (OuterVolumeSpecName: "scripts") pod "d057fecf-b22d-4304-9ce4-4fbbd358ecc5" (UID: "d057fecf-b22d-4304-9ce4-4fbbd358ecc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.900667 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d057fecf-b22d-4304-9ce4-4fbbd358ecc5" (UID: "d057fecf-b22d-4304-9ce4-4fbbd358ecc5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.930160 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-config-data" (OuterVolumeSpecName: "config-data") pod "d057fecf-b22d-4304-9ce4-4fbbd358ecc5" (UID: "d057fecf-b22d-4304-9ce4-4fbbd358ecc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.931369 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d057fecf-b22d-4304-9ce4-4fbbd358ecc5" (UID: "d057fecf-b22d-4304-9ce4-4fbbd358ecc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.994506 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.994535 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zglrl\" (UniqueName: \"kubernetes.io/projected/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-kube-api-access-zglrl\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.994548 4789 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.994558 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.994566 4789 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:39 crc kubenswrapper[4789]: I1124 11:46:39.994574 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d057fecf-b22d-4304-9ce4-4fbbd358ecc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.109082 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gvrrj" 
event={"ID":"d057fecf-b22d-4304-9ce4-4fbbd358ecc5","Type":"ContainerDied","Data":"efa4fd79fbea09df4cd1b1f98267ff7c150cd1e3942257954320b1e8643c2122"} Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.109123 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa4fd79fbea09df4cd1b1f98267ff7c150cd1e3942257954320b1e8643c2122" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.109182 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gvrrj" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.270378 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gvrrj"] Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.278347 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gvrrj"] Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.387809 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fwr22"] Nov 24 11:46:40 crc kubenswrapper[4789]: E1124 11:46:40.388116 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d057fecf-b22d-4304-9ce4-4fbbd358ecc5" containerName="keystone-bootstrap" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.388131 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d057fecf-b22d-4304-9ce4-4fbbd358ecc5" containerName="keystone-bootstrap" Nov 24 11:46:40 crc kubenswrapper[4789]: E1124 11:46:40.388171 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1cea940-ff88-4c80-98fb-548eef2631e1" containerName="init" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.388179 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1cea940-ff88-4c80-98fb-548eef2631e1" containerName="init" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.388343 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1cea940-ff88-4c80-98fb-548eef2631e1" containerName="init" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.388365 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d057fecf-b22d-4304-9ce4-4fbbd358ecc5" containerName="keystone-bootstrap" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.388894 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.394057 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gpqd2" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.394622 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.394886 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.395040 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.400863 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.415858 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fwr22"] Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.502412 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-scripts\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.502500 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-fernet-keys\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.502632 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-config-data\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.502731 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2nv\" (UniqueName: \"kubernetes.io/projected/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-kube-api-access-zb2nv\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.502759 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-credential-keys\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.502795 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-combined-ca-bundle\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.604027 4789 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-scripts\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.604071 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-fernet-keys\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.604113 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-config-data\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.604154 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2nv\" (UniqueName: \"kubernetes.io/projected/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-kube-api-access-zb2nv\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.604171 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-credential-keys\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.604194 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-combined-ca-bundle\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.619094 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-credential-keys\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.630355 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-combined-ca-bundle\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.631610 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-config-data\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.634445 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-scripts\") pod \"keystone-bootstrap-fwr22\" (UID: 
\"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.635129 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb2nv\" (UniqueName: \"kubernetes.io/projected/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-kube-api-access-zb2nv\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.638345 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-fernet-keys\") pod \"keystone-bootstrap-fwr22\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:40 crc kubenswrapper[4789]: I1124 11:46:40.704229 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:46:42 crc kubenswrapper[4789]: I1124 11:46:42.178767 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d057fecf-b22d-4304-9ce4-4fbbd358ecc5" path="/var/lib/kubelet/pods/d057fecf-b22d-4304-9ce4-4fbbd358ecc5/volumes" Nov 24 11:46:43 crc kubenswrapper[4789]: I1124 11:46:43.561614 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:46:43 crc kubenswrapper[4789]: I1124 11:46:43.624585 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2mcd8"] Nov 24 11:46:43 crc kubenswrapper[4789]: I1124 11:46:43.624969 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerName="dnsmasq-dns" containerID="cri-o://5377bf93bbcf30611581a21ec01c42b0fd1c463c51d24ff0155e87586e5c76e5" gracePeriod=10 Nov 24 11:46:44 crc kubenswrapper[4789]: I1124 11:46:44.160476 4789 generic.go:334] "Generic (PLEG): container finished" podID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerID="5377bf93bbcf30611581a21ec01c42b0fd1c463c51d24ff0155e87586e5c76e5" exitCode=0 Nov 24 11:46:44 crc kubenswrapper[4789]: I1124 11:46:44.160768 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" event={"ID":"b2dbeaf7-abf7-4d60-a27f-e60b91597b44","Type":"ContainerDied","Data":"5377bf93bbcf30611581a21ec01c42b0fd1c463c51d24ff0155e87586e5c76e5"} Nov 24 11:46:47 crc kubenswrapper[4789]: I1124 11:46:47.434736 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Nov 24 11:46:50 crc kubenswrapper[4789]: I1124 11:46:50.212072 4789 generic.go:334] "Generic (PLEG): container finished" podID="7ce66a07-c046-4c6c-b5a5-443818f1b5db" containerID="326d01aed54a27faad41244ea6c18159d3da2e453337a0d01eff0fbbb474da84" exitCode=0 Nov 24 11:46:50 crc kubenswrapper[4789]: I1124 11:46:50.212200 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7s7v7" event={"ID":"7ce66a07-c046-4c6c-b5a5-443818f1b5db","Type":"ContainerDied","Data":"326d01aed54a27faad41244ea6c18159d3da2e453337a0d01eff0fbbb474da84"} Nov 24 11:46:52 crc kubenswrapper[4789]: I1124 11:46:52.435095 4789 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.141349 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.239378 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7s7v7" event={"ID":"7ce66a07-c046-4c6c-b5a5-443818f1b5db","Type":"ContainerDied","Data":"812e9a6d07f15c65f265d8f9a1a84ec0b28980ee881cbf2502ce0a4838bd159f"} Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.239426 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="812e9a6d07f15c65f265d8f9a1a84ec0b28980ee881cbf2502ce0a4838bd159f" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.239534 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7s7v7" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.244054 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmrf7\" (UniqueName: \"kubernetes.io/projected/7ce66a07-c046-4c6c-b5a5-443818f1b5db-kube-api-access-wmrf7\") pod \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.244332 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-combined-ca-bundle\") pod \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.244478 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-config\") pod \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\" (UID: \"7ce66a07-c046-4c6c-b5a5-443818f1b5db\") " Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.252360 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce66a07-c046-4c6c-b5a5-443818f1b5db-kube-api-access-wmrf7" (OuterVolumeSpecName: "kube-api-access-wmrf7") pod "7ce66a07-c046-4c6c-b5a5-443818f1b5db" (UID: "7ce66a07-c046-4c6c-b5a5-443818f1b5db"). InnerVolumeSpecName "kube-api-access-wmrf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.274202 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ce66a07-c046-4c6c-b5a5-443818f1b5db" (UID: "7ce66a07-c046-4c6c-b5a5-443818f1b5db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.297399 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-config" (OuterVolumeSpecName: "config") pod "7ce66a07-c046-4c6c-b5a5-443818f1b5db" (UID: "7ce66a07-c046-4c6c-b5a5-443818f1b5db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.346020 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.346061 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ce66a07-c046-4c6c-b5a5-443818f1b5db-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:53 crc kubenswrapper[4789]: I1124 11:46:53.346072 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmrf7\" (UniqueName: \"kubernetes.io/projected/7ce66a07-c046-4c6c-b5a5-443818f1b5db-kube-api-access-wmrf7\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.423162 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:54 crc kubenswrapper[4789]: E1124 11:46:54.427562 4789 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 11:46:54 crc kubenswrapper[4789]: E1124 11:46:54.428066 4789 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9mc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-msb22_openstack(2e41ad3b-8d25-49db-8c15-4a3a57f47e2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:46:54 crc kubenswrapper[4789]: E1124 11:46:54.431305 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-msb22" podUID="2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.443427 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-zr4gs"] Nov 24 11:46:54 crc kubenswrapper[4789]: E1124 11:46:54.443800 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerName="dnsmasq-dns" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.443815 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerName="dnsmasq-dns" Nov 24 11:46:54 crc kubenswrapper[4789]: E1124 11:46:54.443823 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce66a07-c046-4c6c-b5a5-443818f1b5db" containerName="neutron-db-sync" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.443828 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce66a07-c046-4c6c-b5a5-443818f1b5db" containerName="neutron-db-sync" Nov 24 11:46:54 crc kubenswrapper[4789]: E1124 11:46:54.443845 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerName="init" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.443851 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerName="init" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.443995 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce66a07-c046-4c6c-b5a5-443818f1b5db" containerName="neutron-db-sync" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.444015 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" containerName="dnsmasq-dns" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.445622 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.450507 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-zr4gs"] Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.579480 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-dns-svc\") pod \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.579821 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-sb\") pod \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.579868 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtg2n\" (UniqueName: \"kubernetes.io/projected/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-kube-api-access-gtg2n\") pod \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.579938 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-nb\") pod \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.580033 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-config\") pod \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\" (UID: \"b2dbeaf7-abf7-4d60-a27f-e60b91597b44\") " Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.586655 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.586697 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.586744 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.586782 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-config\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " 
pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.586836 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6n6\" (UniqueName: \"kubernetes.io/projected/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-kube-api-access-2f6n6\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.619786 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-kube-api-access-gtg2n" (OuterVolumeSpecName: "kube-api-access-gtg2n") pod "b2dbeaf7-abf7-4d60-a27f-e60b91597b44" (UID: "b2dbeaf7-abf7-4d60-a27f-e60b91597b44"). InnerVolumeSpecName "kube-api-access-gtg2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.626129 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f8c9d6bfb-grt9w"] Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.628010 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.631991 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vwb4m" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.632194 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.632303 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.632400 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.639009 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f8c9d6bfb-grt9w"] Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.688792 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.688940 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.689049 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.689134 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-config\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: 
\"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.689216 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f6n6\" (UniqueName: \"kubernetes.io/projected/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-kube-api-access-2f6n6\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.689331 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtg2n\" (UniqueName: \"kubernetes.io/projected/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-kube-api-access-gtg2n\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.690781 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.691679 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.692795 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.693124 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-config\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.720494 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f6n6\" (UniqueName: \"kubernetes.io/projected/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-kube-api-access-2f6n6\") pod \"dnsmasq-dns-5f66db59b9-zr4gs\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.733861 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2dbeaf7-abf7-4d60-a27f-e60b91597b44" (UID: "b2dbeaf7-abf7-4d60-a27f-e60b91597b44"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.741262 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-config" (OuterVolumeSpecName: "config") pod "b2dbeaf7-abf7-4d60-a27f-e60b91597b44" (UID: "b2dbeaf7-abf7-4d60-a27f-e60b91597b44"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.744630 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b2dbeaf7-abf7-4d60-a27f-e60b91597b44" (UID: "b2dbeaf7-abf7-4d60-a27f-e60b91597b44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.773589 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.792348 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cgfh\" (UniqueName: \"kubernetes.io/projected/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-kube-api-access-7cgfh\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.793608 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-ovndb-tls-certs\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.794507 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-config\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.794579 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-combined-ca-bundle\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.794633 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-httpd-config\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.794737 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.794754 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.794785 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.889334 4789 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2dbeaf7-abf7-4d60-a27f-e60b91597b44" (UID: "b2dbeaf7-abf7-4d60-a27f-e60b91597b44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.896096 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cgfh\" (UniqueName: \"kubernetes.io/projected/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-kube-api-access-7cgfh\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.896266 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-ovndb-tls-certs\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.896311 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-config\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.896344 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-combined-ca-bundle\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.896374 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-httpd-config\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.896433 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2dbeaf7-abf7-4d60-a27f-e60b91597b44-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.905971 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-config\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.907752 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-combined-ca-bundle\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.909700 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-ovndb-tls-certs\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " 
pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.912918 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-httpd-config\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.930278 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cgfh\" (UniqueName: \"kubernetes.io/projected/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-kube-api-access-7cgfh\") pod \"neutron-f8c9d6bfb-grt9w\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:54 crc kubenswrapper[4789]: I1124 11:46:54.973537 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fwr22"] Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.040084 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.319549 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fwr22" event={"ID":"b0c3fb8f-0aab-4e51-bfa0-50e905479f77","Type":"ContainerStarted","Data":"09fb29e690ccc7728a0c2f511a01dc0f0121b504660df2b743b4b84795e8fd8b"} Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.319825 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fwr22" event={"ID":"b0c3fb8f-0aab-4e51-bfa0-50e905479f77","Type":"ContainerStarted","Data":"ae1446f8d8db6c29dc2d33f4c7fd63f71169e1b0d098340ea2997831a8d27d55"} Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.325816 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerStarted","Data":"eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10"} Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.328665 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mvgg8" event={"ID":"bf547f01-0021-4f93-ae9b-a7afa5016c6a","Type":"ContainerStarted","Data":"ab1c66e1538c230613aada80e9b75be0d893f252c28efd5e97e10f7f2eb347ce"} Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.348834 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fwr22" podStartSLOduration=15.348819481 podStartE2EDuration="15.348819481s" podCreationTimestamp="2025-11-24 11:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:46:55.348407562 +0000 UTC m=+997.930878941" watchObservedRunningTime="2025-11-24 11:46:55.348819481 +0000 UTC m=+997.931290860" Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.349636 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gn9zx" event={"ID":"ad19529b-59a5-42f3-8adf-ba14978e1f8a","Type":"ContainerStarted","Data":"3a5f734314a67825f0218cc23490a22234ef30c531f146bda5b5f972ce330a55"} Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.359218 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.359706 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2mcd8" event={"ID":"b2dbeaf7-abf7-4d60-a27f-e60b91597b44","Type":"ContainerDied","Data":"c61e9fc1181257d9524e694e1c8bc1e819b8735dda5ace09b80b1ac3e8dc4910"} Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.359760 4789 scope.go:117] "RemoveContainer" containerID="5377bf93bbcf30611581a21ec01c42b0fd1c463c51d24ff0155e87586e5c76e5" Nov 24 11:46:55 crc kubenswrapper[4789]: E1124 11:46:55.363725 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-msb22" podUID="2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.386302 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-mvgg8" podStartSLOduration=2.861541522 podStartE2EDuration="23.386280478s" podCreationTimestamp="2025-11-24 11:46:32 +0000 UTC" firstStartedPulling="2025-11-24 11:46:33.771066181 +0000 UTC m=+976.353537560" lastFinishedPulling="2025-11-24 11:46:54.295805127 +0000 UTC m=+996.878276516" observedRunningTime="2025-11-24 11:46:55.359415988 +0000 UTC m=+997.941887367" watchObservedRunningTime="2025-11-24 11:46:55.386280478 +0000 UTC m=+997.968751857" Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.388525 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-zr4gs"] Nov 24 11:46:55 crc kubenswrapper[4789]: W1124 11:46:55.409567 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf6a1ec5_8f3b_48ef_ba4a_ea43df54993b.slice/crio-b8ea22b515f29623ff7347962a9e62a301c6e7af012fb0b3996b5b84b244d725 WatchSource:0}: Error finding container b8ea22b515f29623ff7347962a9e62a301c6e7af012fb0b3996b5b84b244d725: Status 404 returned error can't find the container with id b8ea22b515f29623ff7347962a9e62a301c6e7af012fb0b3996b5b84b244d725 Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.459565 4789 scope.go:117] "RemoveContainer" containerID="85d93876c6ae5d5c0f07793b37a6aed37075742b066c1e8e08debe771caba1d5" Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.468971 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gn9zx" podStartSLOduration=3.261581857 podStartE2EDuration="23.46895386s" podCreationTimestamp="2025-11-24 11:46:32 +0000 UTC" firstStartedPulling="2025-11-24 11:46:34.11176757 +0000 UTC m=+976.694238939" lastFinishedPulling="2025-11-24 11:46:54.319139553 +0000 UTC m=+996.901610942" observedRunningTime="2025-11-24 11:46:55.437593221 +0000 UTC m=+998.020064610" watchObservedRunningTime="2025-11-24 11:46:55.46895386 +0000 UTC m=+998.051425239" Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.470569 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2mcd8"] Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.495166 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2mcd8"] Nov 24 11:46:55 crc kubenswrapper[4789]: I1124 11:46:55.626438 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f8c9d6bfb-grt9w"] 
Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.189308 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2dbeaf7-abf7-4d60-a27f-e60b91597b44" path="/var/lib/kubelet/pods/b2dbeaf7-abf7-4d60-a27f-e60b91597b44/volumes" Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.367192 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8c9d6bfb-grt9w" event={"ID":"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe","Type":"ContainerStarted","Data":"98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d"} Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.367532 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8c9d6bfb-grt9w" event={"ID":"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe","Type":"ContainerStarted","Data":"bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a"} Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.367542 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8c9d6bfb-grt9w" event={"ID":"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe","Type":"ContainerStarted","Data":"0a2b6561feee5c12a428795f1898503b5be52382e0c1ce1df0b6fb925d32e32c"} Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.368630 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.370726 4789 generic.go:334] "Generic (PLEG): container finished" podID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" containerID="27cac019669893f9ca4c054dfeca345edaaabba2669cefc3aa4658aef8be8c9f" exitCode=0 Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.371671 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" event={"ID":"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b","Type":"ContainerDied","Data":"27cac019669893f9ca4c054dfeca345edaaabba2669cefc3aa4658aef8be8c9f"} Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.371695 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" event={"ID":"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b","Type":"ContainerStarted","Data":"b8ea22b515f29623ff7347962a9e62a301c6e7af012fb0b3996b5b84b244d725"} Nov 24 11:46:56 crc kubenswrapper[4789]: I1124 11:46:56.409369 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f8c9d6bfb-grt9w" podStartSLOduration=2.409350248 podStartE2EDuration="2.409350248s" podCreationTimestamp="2025-11-24 11:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:46:56.409198114 +0000 UTC m=+998.991669493" watchObservedRunningTime="2025-11-24 11:46:56.409350248 +0000 UTC m=+998.991821627" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.310948 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85c5468469-htqfg"] Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.312782 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.314401 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.316061 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.348699 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85c5468469-htqfg"] Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.421639 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" event={"ID":"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b","Type":"ContainerStarted","Data":"d2c5daf18048616e070af03b7ea9db79794daa5581f4511463b00e63b3f98633"} Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.442754 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-ovndb-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.442824 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6j5r\" (UniqueName: \"kubernetes.io/projected/f2e0e6a2-b3ea-478b-b836-c20f7962266c-kube-api-access-s6j5r\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.442856 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-httpd-config\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.442898 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-public-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.442920 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-internal-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.443000 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-config\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.443019 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-combined-ca-bundle\") 
pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.544274 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6j5r\" (UniqueName: \"kubernetes.io/projected/f2e0e6a2-b3ea-478b-b836-c20f7962266c-kube-api-access-s6j5r\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.544325 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-httpd-config\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.544390 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-public-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.544411 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-internal-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.544501 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-config\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.544520 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-combined-ca-bundle\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.544570 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-ovndb-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.549157 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-public-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.549518 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-internal-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 
11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.549803 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-ovndb-tls-certs\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.550752 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-combined-ca-bundle\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.552421 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-httpd-config\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.566608 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2e0e6a2-b3ea-478b-b836-c20f7962266c-config\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.574173 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6j5r\" (UniqueName: \"kubernetes.io/projected/f2e0e6a2-b3ea-478b-b836-c20f7962266c-kube-api-access-s6j5r\") pod \"neutron-85c5468469-htqfg\" (UID: \"f2e0e6a2-b3ea-478b-b836-c20f7962266c\") " pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:46:57 crc kubenswrapper[4789]: I1124 11:46:57.628250 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:46:58.228701 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85c5468469-htqfg"] Nov 24 11:47:01 crc kubenswrapper[4789]: W1124 11:47:00.443211 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e0e6a2_b3ea_478b_b836_c20f7962266c.slice/crio-fa93622e6040ac46c28479849c6a09d5139950df2ca73fe9feaeddf0a21a593c WatchSource:0}: Error finding container fa93622e6040ac46c28479849c6a09d5139950df2ca73fe9feaeddf0a21a593c: Status 404 returned error can't find the container with id fa93622e6040ac46c28479849c6a09d5139950df2ca73fe9feaeddf0a21a593c Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:00.455386 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:00.488761 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" podStartSLOduration=6.488729832 podStartE2EDuration="6.488729832s" podCreationTimestamp="2025-11-24 11:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:00.48368266 +0000 UTC m=+1003.066154159" watchObservedRunningTime="2025-11-24 11:47:00.488729832 +0000 UTC m=+1003.071201261" Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:01.475077 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85c5468469-htqfg" event={"ID":"f2e0e6a2-b3ea-478b-b836-c20f7962266c","Type":"ContainerStarted","Data":"eab0c74d7eb714cdadb3447072bc3e3906e10d1c789bc04fd7355770f154b52d"} Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:01.475638 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85c5468469-htqfg" event={"ID":"f2e0e6a2-b3ea-478b-b836-c20f7962266c","Type":"ContainerStarted","Data":"70c333642a64c4674e008945936495fe1b24a3337f8e7ad0e63a4e6817867ec4"} Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:01.475656 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85c5468469-htqfg" event={"ID":"f2e0e6a2-b3ea-478b-b836-c20f7962266c","Type":"ContainerStarted","Data":"fa93622e6040ac46c28479849c6a09d5139950df2ca73fe9feaeddf0a21a593c"} Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:01.475715 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:01.479091 4789 generic.go:334] "Generic (PLEG): container finished" podID="b0c3fb8f-0aab-4e51-bfa0-50e905479f77" containerID="09fb29e690ccc7728a0c2f511a01dc0f0121b504660df2b743b4b84795e8fd8b" exitCode=0 Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:01.479163 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fwr22" event={"ID":"b0c3fb8f-0aab-4e51-bfa0-50e905479f77","Type":"ContainerDied","Data":"09fb29e690ccc7728a0c2f511a01dc0f0121b504660df2b743b4b84795e8fd8b"} Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:01.481574 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerStarted","Data":"214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b"} Nov 24 11:47:01 crc kubenswrapper[4789]: I1124 11:47:01.509297 4789 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85c5468469-htqfg" podStartSLOduration=4.50927844 podStartE2EDuration="4.50927844s" podCreationTimestamp="2025-11-24 11:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:01.500240562 +0000 UTC m=+1004.082711961" watchObservedRunningTime="2025-11-24 11:47:01.50927844 +0000 UTC m=+1004.091749819" Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.855418 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.941321 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-credential-keys\") pod \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.941511 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-combined-ca-bundle\") pod \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.941591 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-config-data\") pod \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.941613 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb2nv\" (UniqueName: \"kubernetes.io/projected/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-kube-api-access-zb2nv\") pod \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.941654 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-fernet-keys\") pod \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.941806 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-scripts\") pod \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\" (UID: \"b0c3fb8f-0aab-4e51-bfa0-50e905479f77\") " Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.949028 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-kube-api-access-zb2nv" (OuterVolumeSpecName: "kube-api-access-zb2nv") pod "b0c3fb8f-0aab-4e51-bfa0-50e905479f77" (UID: "b0c3fb8f-0aab-4e51-bfa0-50e905479f77"). InnerVolumeSpecName "kube-api-access-zb2nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.949116 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-scripts" (OuterVolumeSpecName: "scripts") pod "b0c3fb8f-0aab-4e51-bfa0-50e905479f77" (UID: "b0c3fb8f-0aab-4e51-bfa0-50e905479f77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.957240 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b0c3fb8f-0aab-4e51-bfa0-50e905479f77" (UID: "b0c3fb8f-0aab-4e51-bfa0-50e905479f77"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.959673 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b0c3fb8f-0aab-4e51-bfa0-50e905479f77" (UID: "b0c3fb8f-0aab-4e51-bfa0-50e905479f77"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.980049 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-config-data" (OuterVolumeSpecName: "config-data") pod "b0c3fb8f-0aab-4e51-bfa0-50e905479f77" (UID: "b0c3fb8f-0aab-4e51-bfa0-50e905479f77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:02 crc kubenswrapper[4789]: I1124 11:47:02.989611 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0c3fb8f-0aab-4e51-bfa0-50e905479f77" (UID: "b0c3fb8f-0aab-4e51-bfa0-50e905479f77"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.043543 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.043815 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb2nv\" (UniqueName: \"kubernetes.io/projected/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-kube-api-access-zb2nv\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.043827 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.043841 4789 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.043849 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.043858 4789 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0c3fb8f-0aab-4e51-bfa0-50e905479f77-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.514096 4789 generic.go:334] "Generic (PLEG): container finished" podID="bf547f01-0021-4f93-ae9b-a7afa5016c6a" containerID="ab1c66e1538c230613aada80e9b75be0d893f252c28efd5e97e10f7f2eb347ce" exitCode=0 Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.514195 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mvgg8" event={"ID":"bf547f01-0021-4f93-ae9b-a7afa5016c6a","Type":"ContainerDied","Data":"ab1c66e1538c230613aada80e9b75be0d893f252c28efd5e97e10f7f2eb347ce"} Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.517832 4789 generic.go:334] "Generic (PLEG): container finished" podID="ad19529b-59a5-42f3-8adf-ba14978e1f8a" containerID="3a5f734314a67825f0218cc23490a22234ef30c531f146bda5b5f972ce330a55" exitCode=0 Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.517906 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gn9zx" event={"ID":"ad19529b-59a5-42f3-8adf-ba14978e1f8a","Type":"ContainerDied","Data":"3a5f734314a67825f0218cc23490a22234ef30c531f146bda5b5f972ce330a55"} Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.519510 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fwr22" event={"ID":"b0c3fb8f-0aab-4e51-bfa0-50e905479f77","Type":"ContainerDied","Data":"ae1446f8d8db6c29dc2d33f4c7fd63f71169e1b0d098340ea2997831a8d27d55"} Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.519533 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae1446f8d8db6c29dc2d33f4c7fd63f71169e1b0d098340ea2997831a8d27d55" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.519582 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fwr22" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.625686 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-784c4967d9-9h8jd"] Nov 24 11:47:03 crc kubenswrapper[4789]: E1124 11:47:03.626080 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c3fb8f-0aab-4e51-bfa0-50e905479f77" containerName="keystone-bootstrap" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.626105 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c3fb8f-0aab-4e51-bfa0-50e905479f77" containerName="keystone-bootstrap" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.626355 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c3fb8f-0aab-4e51-bfa0-50e905479f77" containerName="keystone-bootstrap" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.626977 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.631367 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.631494 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.632099 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.636561 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.642508 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gpqd2" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.642714 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.653901 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-784c4967d9-9h8jd"] Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.755013 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-config-data\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.755053 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-scripts\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.755072 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-internal-tls-certs\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.755098 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-public-tls-certs\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.755137 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-fernet-keys\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.755182 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-combined-ca-bundle\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.755211 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcs9\" (UniqueName: \"kubernetes.io/projected/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-kube-api-access-vqcs9\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.755234 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-credential-keys\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.856691 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-combined-ca-bundle\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.856746 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqcs9\" (UniqueName: \"kubernetes.io/projected/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-kube-api-access-vqcs9\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.856777 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-credential-keys\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.856812 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-config-data\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.856829 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-scripts\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.856851 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-internal-tls-certs\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.856875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-public-tls-certs\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.856912 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-fernet-keys\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.874585 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-public-tls-certs\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.874974 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-fernet-keys\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.875614 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-internal-tls-certs\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.877278 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-config-data\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.877995 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-credential-keys\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.884068 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-scripts\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " 
pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.887350 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqcs9\" (UniqueName: \"kubernetes.io/projected/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-kube-api-access-vqcs9\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.888749 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d23ab493-ddd0-4e41-aa4d-ed9de9256d1c-combined-ca-bundle\") pod \"keystone-784c4967d9-9h8jd\" (UID: \"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c\") " pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:03 crc kubenswrapper[4789]: I1124 11:47:03.948357 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:04 crc kubenswrapper[4789]: I1124 11:47:04.775564 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:47:04 crc kubenswrapper[4789]: I1124 11:47:04.845059 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2"] Nov 24 11:47:04 crc kubenswrapper[4789]: I1124 11:47:04.845363 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" podUID="4e2acd55-a485-43c8-b3e5-88083c626aa0" containerName="dnsmasq-dns" containerID="cri-o://ca77ed599ef95c42e6450de71bc3f711651b1528f791d5e1185b080b1195d4a1" gracePeriod=10 Nov 24 11:47:05 crc kubenswrapper[4789]: I1124 11:47:05.537663 4789 generic.go:334] "Generic (PLEG): container finished" podID="4e2acd55-a485-43c8-b3e5-88083c626aa0" containerID="ca77ed599ef95c42e6450de71bc3f711651b1528f791d5e1185b080b1195d4a1" exitCode=0 Nov 24 11:47:05 crc kubenswrapper[4789]: I1124 11:47:05.537743 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" event={"ID":"4e2acd55-a485-43c8-b3e5-88083c626aa0","Type":"ContainerDied","Data":"ca77ed599ef95c42e6450de71bc3f711651b1528f791d5e1185b080b1195d4a1"} Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.560316 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mvgg8" event={"ID":"bf547f01-0021-4f93-ae9b-a7afa5016c6a","Type":"ContainerDied","Data":"0be149f3213f1fffa7a28c8587c91247d364ff994cff6b37eb561cff9a625da5"} Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.560546 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be149f3213f1fffa7a28c8587c91247d364ff994cff6b37eb561cff9a625da5" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.563363 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gn9zx" event={"ID":"ad19529b-59a5-42f3-8adf-ba14978e1f8a","Type":"ContainerDied","Data":"0b88730f5ef4ea56b3035d54604954c51a9e153c5bca9a110448bfbd0ab84ade"} Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.563468 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b88730f5ef4ea56b3035d54604954c51a9e153c5bca9a110448bfbd0ab84ade" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.666439 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.693595 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gn9zx" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.752000 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-db-sync-config-data\") pod \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.752050 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/bf547f01-0021-4f93-ae9b-a7afa5016c6a-kube-api-access-jxmvj\") pod \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.752070 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-combined-ca-bundle\") pod \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\" (UID: \"bf547f01-0021-4f93-ae9b-a7afa5016c6a\") " Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.760607 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bf547f01-0021-4f93-ae9b-a7afa5016c6a" (UID: "bf547f01-0021-4f93-ae9b-a7afa5016c6a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.786616 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf547f01-0021-4f93-ae9b-a7afa5016c6a-kube-api-access-jxmvj" (OuterVolumeSpecName: "kube-api-access-jxmvj") pod "bf547f01-0021-4f93-ae9b-a7afa5016c6a" (UID: "bf547f01-0021-4f93-ae9b-a7afa5016c6a"). InnerVolumeSpecName "kube-api-access-jxmvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.809989 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf547f01-0021-4f93-ae9b-a7afa5016c6a" (UID: "bf547f01-0021-4f93-ae9b-a7afa5016c6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.855523 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcsxw\" (UniqueName: \"kubernetes.io/projected/ad19529b-59a5-42f3-8adf-ba14978e1f8a-kube-api-access-jcsxw\") pod \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.855628 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad19529b-59a5-42f3-8adf-ba14978e1f8a-logs\") pod \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.855685 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-combined-ca-bundle\") pod \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.855744 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-config-data\") pod \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.855785 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-scripts\") pod \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\" (UID: \"ad19529b-59a5-42f3-8adf-ba14978e1f8a\") " Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.856068 4789 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.856080 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/bf547f01-0021-4f93-ae9b-a7afa5016c6a-kube-api-access-jxmvj\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.856089 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf547f01-0021-4f93-ae9b-a7afa5016c6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.858443 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad19529b-59a5-42f3-8adf-ba14978e1f8a-logs" (OuterVolumeSpecName: "logs") pod "ad19529b-59a5-42f3-8adf-ba14978e1f8a" (UID: "ad19529b-59a5-42f3-8adf-ba14978e1f8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.858863 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-scripts" (OuterVolumeSpecName: "scripts") pod "ad19529b-59a5-42f3-8adf-ba14978e1f8a" (UID: "ad19529b-59a5-42f3-8adf-ba14978e1f8a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.872568 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad19529b-59a5-42f3-8adf-ba14978e1f8a-kube-api-access-jcsxw" (OuterVolumeSpecName: "kube-api-access-jcsxw") pod "ad19529b-59a5-42f3-8adf-ba14978e1f8a" (UID: "ad19529b-59a5-42f3-8adf-ba14978e1f8a"). InnerVolumeSpecName "kube-api-access-jcsxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.893911 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad19529b-59a5-42f3-8adf-ba14978e1f8a" (UID: "ad19529b-59a5-42f3-8adf-ba14978e1f8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.927398 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-config-data" (OuterVolumeSpecName: "config-data") pod "ad19529b-59a5-42f3-8adf-ba14978e1f8a" (UID: "ad19529b-59a5-42f3-8adf-ba14978e1f8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.958359 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.958720 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcsxw\" (UniqueName: \"kubernetes.io/projected/ad19529b-59a5-42f3-8adf-ba14978e1f8a-kube-api-access-jcsxw\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.958772 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad19529b-59a5-42f3-8adf-ba14978e1f8a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.958818 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:07 crc kubenswrapper[4789]: I1124 11:47:07.958864 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad19529b-59a5-42f3-8adf-ba14978e1f8a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.026257 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.051144 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-784c4967d9-9h8jd"] Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.160537 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8gsp\" (UniqueName: \"kubernetes.io/projected/4e2acd55-a485-43c8-b3e5-88083c626aa0-kube-api-access-g8gsp\") pod \"4e2acd55-a485-43c8-b3e5-88083c626aa0\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.160634 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-sb\") pod \"4e2acd55-a485-43c8-b3e5-88083c626aa0\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.160784 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-nb\") pod \"4e2acd55-a485-43c8-b3e5-88083c626aa0\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.160826 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-dns-svc\") pod \"4e2acd55-a485-43c8-b3e5-88083c626aa0\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.160884 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-config\") pod \"4e2acd55-a485-43c8-b3e5-88083c626aa0\" (UID: \"4e2acd55-a485-43c8-b3e5-88083c626aa0\") " Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.167170 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e2acd55-a485-43c8-b3e5-88083c626aa0-kube-api-access-g8gsp" (OuterVolumeSpecName: "kube-api-access-g8gsp") pod "4e2acd55-a485-43c8-b3e5-88083c626aa0" (UID: "4e2acd55-a485-43c8-b3e5-88083c626aa0"). InnerVolumeSpecName "kube-api-access-g8gsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.236271 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e2acd55-a485-43c8-b3e5-88083c626aa0" (UID: "4e2acd55-a485-43c8-b3e5-88083c626aa0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.242912 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e2acd55-a485-43c8-b3e5-88083c626aa0" (UID: "4e2acd55-a485-43c8-b3e5-88083c626aa0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.253899 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-config" (OuterVolumeSpecName: "config") pod "4e2acd55-a485-43c8-b3e5-88083c626aa0" (UID: "4e2acd55-a485-43c8-b3e5-88083c626aa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.263976 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.264003 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.264012 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.264021 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8gsp\" (UniqueName: \"kubernetes.io/projected/4e2acd55-a485-43c8-b3e5-88083c626aa0-kube-api-access-g8gsp\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.268579 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e2acd55-a485-43c8-b3e5-88083c626aa0" (UID: "4e2acd55-a485-43c8-b3e5-88083c626aa0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.366303 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e2acd55-a485-43c8-b3e5-88083c626aa0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.573843 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-msb22" event={"ID":"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f","Type":"ContainerStarted","Data":"fc9dbd6cb35e285eb0ce34d2a43fff15cbd426d38903163e2a96ce8b8b9c011c"} Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.577258 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" event={"ID":"4e2acd55-a485-43c8-b3e5-88083c626aa0","Type":"ContainerDied","Data":"cf1dc713208fead4773d998722d7e9775ee0dba882e62b8974f2f85a928669ba"} Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.577291 4789 scope.go:117] "RemoveContainer" containerID="ca77ed599ef95c42e6450de71bc3f711651b1528f791d5e1185b080b1195d4a1" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.577400 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.584831 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerStarted","Data":"74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053"} Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.586120 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gn9zx" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.587159 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-784c4967d9-9h8jd" event={"ID":"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c","Type":"ContainerStarted","Data":"35515e5609ddb8a8ea80ea535c3250dfe026e35301188a6c97035a4922c506d0"} Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.587188 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-784c4967d9-9h8jd" event={"ID":"d23ab493-ddd0-4e41-aa4d-ed9de9256d1c","Type":"ContainerStarted","Data":"1030a4f5943b3410727fac1eb7d799856709172273a8b7ef6926c9762f63864c"} Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.587205 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-784c4967d9-9h8jd" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.587245 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mvgg8" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.613489 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-msb22" podStartSLOduration=2.62597823 podStartE2EDuration="36.613451397s" podCreationTimestamp="2025-11-24 11:46:32 +0000 UTC" firstStartedPulling="2025-11-24 11:46:33.853311973 +0000 UTC m=+976.435783352" lastFinishedPulling="2025-11-24 11:47:07.84078514 +0000 UTC m=+1010.423256519" observedRunningTime="2025-11-24 11:47:08.608792064 +0000 UTC m=+1011.191263453" watchObservedRunningTime="2025-11-24 11:47:08.613451397 +0000 UTC m=+1011.195922776" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.632665 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-784c4967d9-9h8jd" podStartSLOduration=5.632645562 podStartE2EDuration="5.632645562s" podCreationTimestamp="2025-11-24 11:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:08.628138783 +0000 UTC m=+1011.210610172" watchObservedRunningTime="2025-11-24 11:47:08.632645562 +0000 UTC m=+1011.215116941" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.637803 4789 scope.go:117] "RemoveContainer" containerID="d87e7363d763cc8f6d5f4402c241d476613fb35afd2ee3b2a17771dc18d5289d" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.663663 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2"] Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.670245 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6bfk2"] Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.854768 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6777ddb46-lfh4x"] Nov 24 11:47:08 crc kubenswrapper[4789]: E1124 11:47:08.855539 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4e2acd55-a485-43c8-b3e5-88083c626aa0" containerName="init" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.855599 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2acd55-a485-43c8-b3e5-88083c626aa0" containerName="init" Nov 24 11:47:08 crc kubenswrapper[4789]: E1124 11:47:08.855659 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad19529b-59a5-42f3-8adf-ba14978e1f8a" containerName="placement-db-sync" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.855705 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad19529b-59a5-42f3-8adf-ba14978e1f8a" containerName="placement-db-sync" Nov 24 11:47:08 crc kubenswrapper[4789]: E1124 11:47:08.855805 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf547f01-0021-4f93-ae9b-a7afa5016c6a" containerName="barbican-db-sync" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.855858 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf547f01-0021-4f93-ae9b-a7afa5016c6a" containerName="barbican-db-sync" Nov 24 11:47:08 crc kubenswrapper[4789]: E1124 11:47:08.855914 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2acd55-a485-43c8-b3e5-88083c626aa0" containerName="dnsmasq-dns" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.855961 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2acd55-a485-43c8-b3e5-88083c626aa0" containerName="dnsmasq-dns" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.856154 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf547f01-0021-4f93-ae9b-a7afa5016c6a" containerName="barbican-db-sync" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.856218 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e2acd55-a485-43c8-b3e5-88083c626aa0" containerName="dnsmasq-dns" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.856374 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad19529b-59a5-42f3-8adf-ba14978e1f8a" containerName="placement-db-sync" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.857359 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.869142 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.869434 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zqvs2" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.873594 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.882962 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7c6b6fc77f-wrz6s"] Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.888858 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.897290 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.903805 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c6b6fc77f-wrz6s"] Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.926425 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6777ddb46-lfh4x"] Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980420 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jffn\" (UniqueName: \"kubernetes.io/projected/e6858fb3-9f7e-4855-abd4-23fdc894d153-kube-api-access-9jffn\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980648 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-combined-ca-bundle\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980680 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndvl\" (UniqueName: \"kubernetes.io/projected/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-kube-api-access-qndvl\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980699 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6858fb3-9f7e-4855-abd4-23fdc894d153-logs\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980743 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-combined-ca-bundle\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980766 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-config-data\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980796 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-config-data-custom\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: 
\"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980834 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-config-data\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980867 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-config-data-custom\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:08 crc kubenswrapper[4789]: I1124 11:47:08.980887 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-logs\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.021858 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-58nhn"] Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.023112 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.074995 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-546dc675b-x2vpf"] Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.084348 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.103191 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jffn\" (UniqueName: \"kubernetes.io/projected/e6858fb3-9f7e-4855-abd4-23fdc894d153-kube-api-access-9jffn\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.103276 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-combined-ca-bundle\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.103301 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qndvl\" (UniqueName: \"kubernetes.io/projected/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-kube-api-access-qndvl\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.103321 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6858fb3-9f7e-4855-abd4-23fdc894d153-logs\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.103352 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-combined-ca-bundle\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.103373 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-config-data\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.103398 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-config-data-custom\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.103425 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-config-data\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.106138 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e6858fb3-9f7e-4855-abd4-23fdc894d153-logs\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.116825 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-config-data-custom\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.116880 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-logs\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.121662 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-logs\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.125447 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-58nhn"] Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.128175 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-config-data\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.200300 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-config-data\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.200799 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-config-data-custom\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.201219 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-combined-ca-bundle\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.206081 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-546dc675b-x2vpf"] Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.207562 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.207709 4789 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.207812 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.207906 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w75rj" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.207999 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.222085 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qndvl\" (UniqueName: \"kubernetes.io/projected/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-kube-api-access-qndvl\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.224043 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3e8f3b-bcd4-4911-b365-e02bad3e8611-combined-ca-bundle\") pod \"barbican-worker-7c6b6fc77f-wrz6s\" (UID: \"6a3e8f3b-bcd4-4911-b365-e02bad3e8611\") " pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.226346 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6858fb3-9f7e-4855-abd4-23fdc894d153-config-data-custom\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.227127 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jffn\" (UniqueName: \"kubernetes.io/projected/e6858fb3-9f7e-4855-abd4-23fdc894d153-kube-api-access-9jffn\") pod \"barbican-keystone-listener-6777ddb46-lfh4x\" (UID: \"e6858fb3-9f7e-4855-abd4-23fdc894d153\") " pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.228495 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243062 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-dns-svc\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243167 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-config-data\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243199 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b80dec-87d3-4357-a667-60524f89de21-logs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243215 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-config\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243233 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-public-tls-certs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243261 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lnhw\" (UniqueName: \"kubernetes.io/projected/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-kube-api-access-7lnhw\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243304 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243327 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48gj\" (UniqueName: \"kubernetes.io/projected/27b80dec-87d3-4357-a667-60524f89de21-kube-api-access-m48gj\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243347 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-scripts\") 
pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243364 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243406 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-combined-ca-bundle\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.243535 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-internal-tls-certs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346344 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-combined-ca-bundle\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346444 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-internal-tls-certs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346531 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-dns-svc\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346569 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-config-data\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346590 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b80dec-87d3-4357-a667-60524f89de21-logs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346607 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-config\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: 
\"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346627 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-public-tls-certs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346649 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lnhw\" (UniqueName: \"kubernetes.io/projected/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-kube-api-access-7lnhw\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346697 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346715 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48gj\" (UniqueName: \"kubernetes.io/projected/27b80dec-87d3-4357-a667-60524f89de21-kube-api-access-m48gj\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346731 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-scripts\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.346748 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.347893 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b80dec-87d3-4357-a667-60524f89de21-logs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.355478 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-config\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.357084 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc 
kubenswrapper[4789]: I1124 11:47:09.358246 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-dns-svc\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.358880 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.361775 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-internal-tls-certs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.366269 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-config-data\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.366578 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-scripts\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.367081 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-combined-ca-bundle\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.375976 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b80dec-87d3-4357-a667-60524f89de21-public-tls-certs\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.380818 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lnhw\" (UniqueName: \"kubernetes.io/projected/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-kube-api-access-7lnhw\") pod \"dnsmasq-dns-869f779d85-58nhn\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.383695 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48gj\" (UniqueName: \"kubernetes.io/projected/27b80dec-87d3-4357-a667-60524f89de21-kube-api-access-m48gj\") pod \"placement-546dc675b-x2vpf\" (UID: \"27b80dec-87d3-4357-a667-60524f89de21\") " pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.483422 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7fb85f479d-hgd4m"] Nov 24 11:47:09 crc 
kubenswrapper[4789]: I1124 11:47:09.485121 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.496865 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fb85f479d-hgd4m"] Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.514320 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.518632 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.553233 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60b78f2d-a541-467f-88f5-daeffe5c9938-logs\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.553286 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data-custom\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.553366 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-combined-ca-bundle\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.553406 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.553439 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jvcf\" (UniqueName: \"kubernetes.io/projected/60b78f2d-a541-467f-88f5-daeffe5c9938-kube-api-access-9jvcf\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.585248 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.651274 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.655609 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-combined-ca-bundle\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.655672 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.655705 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jvcf\" (UniqueName: \"kubernetes.io/projected/60b78f2d-a541-467f-88f5-daeffe5c9938-kube-api-access-9jvcf\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.655746 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60b78f2d-a541-467f-88f5-daeffe5c9938-logs\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.655769 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data-custom\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.658942 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60b78f2d-a541-467f-88f5-daeffe5c9938-logs\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.663633 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-combined-ca-bundle\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.668738 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.677976 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data-custom\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc 
kubenswrapper[4789]: I1124 11:47:09.697105 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jvcf\" (UniqueName: \"kubernetes.io/projected/60b78f2d-a541-467f-88f5-daeffe5c9938-kube-api-access-9jvcf\") pod \"barbican-api-7fb85f479d-hgd4m\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:09 crc kubenswrapper[4789]: I1124 11:47:09.856103 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.027499 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6777ddb46-lfh4x"] Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.191540 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e2acd55-a485-43c8-b3e5-88083c626aa0" path="/var/lib/kubelet/pods/4e2acd55-a485-43c8-b3e5-88083c626aa0/volumes" Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.326106 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-546dc675b-x2vpf"] Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.421525 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-58nhn"] Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.434855 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c6b6fc77f-wrz6s"] Nov 24 11:47:10 crc kubenswrapper[4789]: W1124 11:47:10.448635 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a3e8f3b_bcd4_4911_b365_e02bad3e8611.slice/crio-bbe0be573246d911f2cbbfecee091e6ada1215069c89ad029836cd5221049621 WatchSource:0}: Error finding container bbe0be573246d911f2cbbfecee091e6ada1215069c89ad029836cd5221049621: Status 404 returned error can't find the container with id bbe0be573246d911f2cbbfecee091e6ada1215069c89ad029836cd5221049621 Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.628313 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fb85f479d-hgd4m"] Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.644982 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" event={"ID":"6a3e8f3b-bcd4-4911-b365-e02bad3e8611","Type":"ContainerStarted","Data":"bbe0be573246d911f2cbbfecee091e6ada1215069c89ad029836cd5221049621"} Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.649948 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-58nhn" event={"ID":"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a","Type":"ContainerStarted","Data":"5c85944eff51eecdc4ffa78514fd557c4490690806bb69721d6d2d98830cf596"} Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.656159 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-546dc675b-x2vpf" event={"ID":"27b80dec-87d3-4357-a667-60524f89de21","Type":"ContainerStarted","Data":"d88a5026e1be49f79cff7a96f1617ce66eafc9d8e74e40d5253a59217516fada"} Nov 24 11:47:10 crc kubenswrapper[4789]: I1124 11:47:10.658024 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" event={"ID":"e6858fb3-9f7e-4855-abd4-23fdc894d153","Type":"ContainerStarted","Data":"a885245dca5965422bce7c3897b59fe39036ef224d71880f671a72979e5f3e25"} Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.669122 4789 generic.go:334] 
"Generic (PLEG): container finished" podID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" containerID="74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b" exitCode=0 Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.669347 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-58nhn" event={"ID":"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a","Type":"ContainerDied","Data":"74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b"} Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.675214 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-546dc675b-x2vpf" event={"ID":"27b80dec-87d3-4357-a667-60524f89de21","Type":"ContainerStarted","Data":"e7eabbcc4b2f7c35fcc72bdb2731f57defa3022a15079cce9eb9caf59ff11423"} Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.675257 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-546dc675b-x2vpf" event={"ID":"27b80dec-87d3-4357-a667-60524f89de21","Type":"ContainerStarted","Data":"6310861c88d8b017596af2d798120d90df2d6849a0c0bd5ff50cd729528de629"} Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.675391 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.675427 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-546dc675b-x2vpf" Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.687297 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb85f479d-hgd4m" event={"ID":"60b78f2d-a541-467f-88f5-daeffe5c9938","Type":"ContainerStarted","Data":"363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36"} Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.687347 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb85f479d-hgd4m" event={"ID":"60b78f2d-a541-467f-88f5-daeffe5c9938","Type":"ContainerStarted","Data":"0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111"} Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.687359 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb85f479d-hgd4m" event={"ID":"60b78f2d-a541-467f-88f5-daeffe5c9938","Type":"ContainerStarted","Data":"d5d0127585b3c250417173d076e91f2bf5b4f395073a269f3033795b2b4c0587"} Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.688143 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.688164 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.770152 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7fb85f479d-hgd4m" podStartSLOduration=2.770132192 podStartE2EDuration="2.770132192s" podCreationTimestamp="2025-11-24 11:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:11.746774827 +0000 UTC m=+1014.329246206" watchObservedRunningTime="2025-11-24 11:47:11.770132192 +0000 UTC m=+1014.352603571" Nov 24 11:47:11 crc kubenswrapper[4789]: I1124 11:47:11.775178 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-546dc675b-x2vpf" podStartSLOduration=3.775156403 
podStartE2EDuration="3.775156403s" podCreationTimestamp="2025-11-24 11:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:11.768074562 +0000 UTC m=+1014.350545941" watchObservedRunningTime="2025-11-24 11:47:11.775156403 +0000 UTC m=+1014.357627772" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.711859 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8656dd4674-kcg9p"] Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.713445 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.716024 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.716288 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.725417 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8656dd4674-kcg9p"] Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.840861 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-public-tls-certs\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.840960 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-config-data-custom\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.840990 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-combined-ca-bundle\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.841033 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-logs\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.841062 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbwfh\" (UniqueName: \"kubernetes.io/projected/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-kube-api-access-dbwfh\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.841109 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-config-data\") pod 
\"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.841181 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-internal-tls-certs\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.942686 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-internal-tls-certs\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.942740 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-public-tls-certs\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.942775 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-config-data-custom\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.942824 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-combined-ca-bundle\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.942852 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-logs\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.942883 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbwfh\" (UniqueName: \"kubernetes.io/projected/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-kube-api-access-dbwfh\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.942916 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-config-data\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.944041 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-logs\") pod \"barbican-api-8656dd4674-kcg9p\" 
(UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.954805 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-combined-ca-bundle\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.962503 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-public-tls-certs\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.967136 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbwfh\" (UniqueName: \"kubernetes.io/projected/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-kube-api-access-dbwfh\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.969948 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-config-data-custom\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.970090 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-internal-tls-certs\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:12 crc kubenswrapper[4789]: I1124 11:47:12.972710 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ea02afa-6da7-4e18-ae3f-7110a7b652f3-config-data\") pod \"barbican-api-8656dd4674-kcg9p\" (UID: \"6ea02afa-6da7-4e18-ae3f-7110a7b652f3\") " pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.040706 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.572674 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8656dd4674-kcg9p"] Nov 24 11:47:13 crc kubenswrapper[4789]: W1124 11:47:13.582242 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ea02afa_6da7_4e18_ae3f_7110a7b652f3.slice/crio-03a1a001759e11ec79618c323704cd5cd731dcc558fb20eaebebf9b7af18106a WatchSource:0}: Error finding container 03a1a001759e11ec79618c323704cd5cd731dcc558fb20eaebebf9b7af18106a: Status 404 returned error can't find the container with id 03a1a001759e11ec79618c323704cd5cd731dcc558fb20eaebebf9b7af18106a Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.721140 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8656dd4674-kcg9p" event={"ID":"6ea02afa-6da7-4e18-ae3f-7110a7b652f3","Type":"ContainerStarted","Data":"03a1a001759e11ec79618c323704cd5cd731dcc558fb20eaebebf9b7af18106a"} Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.725010 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" event={"ID":"e6858fb3-9f7e-4855-abd4-23fdc894d153","Type":"ContainerStarted","Data":"9b688d56208d1f8eccb66fa532a0d34f39800513d9bdf49cf2c71a5b755ad5c3"} Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.725071 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" event={"ID":"e6858fb3-9f7e-4855-abd4-23fdc894d153","Type":"ContainerStarted","Data":"3db615a01039d7efedc6416d9dd8c8b32efc93307a6fd4b513c6ec29721605a4"} Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.739651 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" event={"ID":"6a3e8f3b-bcd4-4911-b365-e02bad3e8611","Type":"ContainerStarted","Data":"5c5c14c8617ff11060f0dabade2e0f71f2edfe5e2894a958bb619a5c2d3f128d"} Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.739730 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" event={"ID":"6a3e8f3b-bcd4-4911-b365-e02bad3e8611","Type":"ContainerStarted","Data":"cd7f1277f45329b2afe1ee45c15405a39bccc86bb13abe6b0363760b2e5eb101"} Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.754877 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6777ddb46-lfh4x" podStartSLOduration=2.959985577 podStartE2EDuration="5.754857243s" podCreationTimestamp="2025-11-24 11:47:08 +0000 UTC" firstStartedPulling="2025-11-24 11:47:10.056202047 +0000 UTC m=+1012.638673426" lastFinishedPulling="2025-11-24 11:47:12.851073713 +0000 UTC m=+1015.433545092" observedRunningTime="2025-11-24 11:47:13.747186068 +0000 UTC m=+1016.329657447" watchObservedRunningTime="2025-11-24 11:47:13.754857243 +0000 UTC m=+1016.337328612" Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.761798 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-58nhn" event={"ID":"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a","Type":"ContainerStarted","Data":"0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d"} Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.762593 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:13 crc 
kubenswrapper[4789]: I1124 11:47:13.768656 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7c6b6fc77f-wrz6s" podStartSLOduration=3.343875852 podStartE2EDuration="5.768638707s" podCreationTimestamp="2025-11-24 11:47:08 +0000 UTC" firstStartedPulling="2025-11-24 11:47:10.451965838 +0000 UTC m=+1013.034437217" lastFinishedPulling="2025-11-24 11:47:12.876728693 +0000 UTC m=+1015.459200072" observedRunningTime="2025-11-24 11:47:13.764811214 +0000 UTC m=+1016.347282593" watchObservedRunningTime="2025-11-24 11:47:13.768638707 +0000 UTC m=+1016.351110086" Nov 24 11:47:13 crc kubenswrapper[4789]: I1124 11:47:13.789955 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-58nhn" podStartSLOduration=5.789938292 podStartE2EDuration="5.789938292s" podCreationTimestamp="2025-11-24 11:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:13.781891588 +0000 UTC m=+1016.364362967" watchObservedRunningTime="2025-11-24 11:47:13.789938292 +0000 UTC m=+1016.372409671" Nov 24 11:47:14 crc kubenswrapper[4789]: I1124 11:47:14.773885 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8656dd4674-kcg9p" event={"ID":"6ea02afa-6da7-4e18-ae3f-7110a7b652f3","Type":"ContainerStarted","Data":"32c01bbed082025fbeb506403b05f9984f068bc6511f7db2eb58dd6332b082fc"} Nov 24 11:47:14 crc kubenswrapper[4789]: I1124 11:47:14.774219 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8656dd4674-kcg9p" event={"ID":"6ea02afa-6da7-4e18-ae3f-7110a7b652f3","Type":"ContainerStarted","Data":"dddcefc24f5f3af778ee042bb2e801508e0def81df78bc9c30f88861e5ceaaf0"} Nov 24 11:47:14 crc kubenswrapper[4789]: I1124 11:47:14.798602 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8656dd4674-kcg9p" podStartSLOduration=2.798582113 podStartE2EDuration="2.798582113s" podCreationTimestamp="2025-11-24 11:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:14.79558131 +0000 UTC m=+1017.378052709" watchObservedRunningTime="2025-11-24 11:47:14.798582113 +0000 UTC m=+1017.381053492" Nov 24 11:47:15 crc kubenswrapper[4789]: I1124 11:47:15.782415 4789 generic.go:334] "Generic (PLEG): container finished" podID="2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" containerID="fc9dbd6cb35e285eb0ce34d2a43fff15cbd426d38903163e2a96ce8b8b9c011c" exitCode=0 Nov 24 11:47:15 crc kubenswrapper[4789]: I1124 11:47:15.782598 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-msb22" event={"ID":"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f","Type":"ContainerDied","Data":"fc9dbd6cb35e285eb0ce34d2a43fff15cbd426d38903163e2a96ce8b8b9c011c"} Nov 24 11:47:15 crc kubenswrapper[4789]: I1124 11:47:15.783657 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:15 crc kubenswrapper[4789]: I1124 11:47:15.783811 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.490197 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-msb22" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.552390 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-db-sync-config-data\") pod \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.552450 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-config-data\") pod \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.552542 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-combined-ca-bundle\") pod \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.552660 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-scripts\") pod \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.552699 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-etc-machine-id\") pod \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.552740 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9mc8\" (UniqueName: \"kubernetes.io/projected/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-kube-api-access-s9mc8\") pod \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\" (UID: \"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f\") " Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.555652 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" (UID: "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.558701 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-kube-api-access-s9mc8" (OuterVolumeSpecName: "kube-api-access-s9mc8") pod "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" (UID: "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f"). InnerVolumeSpecName "kube-api-access-s9mc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.569356 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-scripts" (OuterVolumeSpecName: "scripts") pod "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" (UID: "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.581299 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" (UID: "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.588541 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" (UID: "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.611699 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-config-data" (OuterVolumeSpecName: "config-data") pod "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" (UID: "2e41ad3b-8d25-49db-8c15-4a3a57f47e2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.654919 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.654997 4789 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.655018 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9mc8\" (UniqueName: \"kubernetes.io/projected/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-kube-api-access-s9mc8\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.655031 4789 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.655043 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.655132 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.815749 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-msb22" event={"ID":"2e41ad3b-8d25-49db-8c15-4a3a57f47e2f","Type":"ContainerDied","Data":"4280a6e1c5950e3b00092cd12076c9b1481e5259c798782b177a411d6dd30963"} Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.815788 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-msb22" Nov 24 11:47:18 crc kubenswrapper[4789]: I1124 11:47:18.815792 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4280a6e1c5950e3b00092cd12076c9b1481e5259c798782b177a411d6dd30963" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.615261 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.653656 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.747197 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-zr4gs"] Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.747423 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" podUID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" containerName="dnsmasq-dns" containerID="cri-o://d2c5daf18048616e070af03b7ea9db79794daa5581f4511463b00e63b3f98633" gracePeriod=10 Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.862132 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:19 crc kubenswrapper[4789]: E1124 11:47:19.865916 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" containerName="cinder-db-sync" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.865942 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" containerName="cinder-db-sync" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.866107 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" containerName="cinder-db-sync" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.866983 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.870547 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.870759 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7smvg" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.870913 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.871062 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.885876 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.981605 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.981824 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-scripts\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.981913 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.982006 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.982088 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q694b\" (UniqueName: \"kubernetes.io/projected/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-kube-api-access-q694b\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:19 crc kubenswrapper[4789]: I1124 11:47:19.982196 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.038754 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-n5hqj"] Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.040120 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.065542 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-n5hqj"] Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.083279 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.083344 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q694b\" (UniqueName: \"kubernetes.io/projected/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-kube-api-access-q694b\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.083416 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.083497 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.083520 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-scripts\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.083552 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.083944 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.096231 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.114415 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-scripts\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc 
kubenswrapper[4789]: I1124 11:47:20.115008 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.118258 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.125050 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q694b\" (UniqueName: \"kubernetes.io/projected/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-kube-api-access-q694b\") pod \"cinder-scheduler-0\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.189366 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5rs\" (UniqueName: \"kubernetes.io/projected/e6978127-8354-4009-af79-a96fc2e47c9f-kube-api-access-cj5rs\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.189471 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.189505 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-dns-svc\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.189540 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.189569 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-config\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.201858 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.218331 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.238564 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.243506 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.261684 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.294821 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.294875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-config\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.294912 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-scripts\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.294968 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data-custom\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.294989 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj5rs\" (UniqueName: \"kubernetes.io/projected/e6978127-8354-4009-af79-a96fc2e47c9f-kube-api-access-cj5rs\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.295005 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0896441a-c9db-4517-ae60-e0afa4cee74e-logs\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.295030 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0896441a-c9db-4517-ae60-e0afa4cee74e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.295065 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.295100 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.295128 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d55kq\" (UniqueName: \"kubernetes.io/projected/0896441a-c9db-4517-ae60-e0afa4cee74e-kube-api-access-d55kq\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.295147 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-dns-svc\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.295178 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.296267 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.297321 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-config\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.298110 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.298334 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-dns-svc\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.340613 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj5rs\" (UniqueName: \"kubernetes.io/projected/e6978127-8354-4009-af79-a96fc2e47c9f-kube-api-access-cj5rs\") pod \"dnsmasq-dns-58db5546cc-n5hqj\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.373543 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.399240 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.399295 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-scripts\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.399331 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data-custom\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.399351 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0896441a-c9db-4517-ae60-e0afa4cee74e-logs\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.399367 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0896441a-c9db-4517-ae60-e0afa4cee74e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.399402 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.399450 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d55kq\" (UniqueName: \"kubernetes.io/projected/0896441a-c9db-4517-ae60-e0afa4cee74e-kube-api-access-d55kq\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.399717 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0896441a-c9db-4517-ae60-e0afa4cee74e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.400130 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0896441a-c9db-4517-ae60-e0afa4cee74e-logs\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.409792 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " 
pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.409843 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.416452 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-scripts\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.416930 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data-custom\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.424347 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d55kq\" (UniqueName: \"kubernetes.io/projected/0896441a-c9db-4517-ae60-e0afa4cee74e-kube-api-access-d55kq\") pod \"cinder-api-0\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.468835 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8656dd4674-kcg9p" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.638579 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fb85f479d-hgd4m"] Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.638867 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fb85f479d-hgd4m" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api-log" containerID="cri-o://0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111" gracePeriod=30 Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.639229 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fb85f479d-hgd4m" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api" containerID="cri-o://363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36" gracePeriod=30 Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.652720 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb85f479d-hgd4m" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": EOF" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.653197 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb85f479d-hgd4m" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": EOF" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.653785 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7fb85f479d-hgd4m" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": EOF" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.654038 4789 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7fb85f479d-hgd4m" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": EOF" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.707844 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.757767 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e6236001-96b0-4425-9f1f-eb84778d290a" containerName="galera" probeResult="failure" output="command timed out" Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.880975 4789 generic.go:334] "Generic (PLEG): container finished" podID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerID="0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111" exitCode=143 Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.881296 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb85f479d-hgd4m" event={"ID":"60b78f2d-a541-467f-88f5-daeffe5c9938","Type":"ContainerDied","Data":"0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111"} Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.909608 4789 generic.go:334] "Generic (PLEG): container finished" podID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" containerID="d2c5daf18048616e070af03b7ea9db79794daa5581f4511463b00e63b3f98633" exitCode=0 Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.909873 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" event={"ID":"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b","Type":"ContainerDied","Data":"d2c5daf18048616e070af03b7ea9db79794daa5581f4511463b00e63b3f98633"} Nov 24 11:47:20 crc kubenswrapper[4789]: I1124 11:47:20.974875 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.044180 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-dns-svc\") pod \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.044231 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-config\") pod \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.044333 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-nb\") pod \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.044354 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f6n6\" (UniqueName: \"kubernetes.io/projected/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-kube-api-access-2f6n6\") pod \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.044388 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-sb\") pod \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\" (UID: \"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b\") " Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.067315 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-kube-api-access-2f6n6" (OuterVolumeSpecName: "kube-api-access-2f6n6") pod "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" (UID: "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b"). InnerVolumeSpecName "kube-api-access-2f6n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.158054 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f6n6\" (UniqueName: \"kubernetes.io/projected/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-kube-api-access-2f6n6\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.161360 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-config" (OuterVolumeSpecName: "config") pod "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" (UID: "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.185039 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.202569 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" (UID: "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.209182 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" (UID: "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.260677 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.260708 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.260718 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.263072 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" (UID: "cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.360361 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-n5hqj"] Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.361912 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.477177 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.952385 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" event={"ID":"cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b","Type":"ContainerDied","Data":"b8ea22b515f29623ff7347962a9e62a301c6e7af012fb0b3996b5b84b244d725"} Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.952429 4789 scope.go:117] "RemoveContainer" containerID="d2c5daf18048616e070af03b7ea9db79794daa5581f4511463b00e63b3f98633" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.952553 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-zr4gs" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.967930 4789 generic.go:334] "Generic (PLEG): container finished" podID="e6978127-8354-4009-af79-a96fc2e47c9f" containerID="a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b" exitCode=0 Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.968329 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" event={"ID":"e6978127-8354-4009-af79-a96fc2e47c9f","Type":"ContainerDied","Data":"a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b"} Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.968356 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" event={"ID":"e6978127-8354-4009-af79-a96fc2e47c9f","Type":"ContainerStarted","Data":"9327d548d70dc6667fc17207e61a5e2744425ac0d79a91c386f141dd3beadeb4"} Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.976044 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16f2b7dc-63ee-4cc6-8787-2b15971d30b5","Type":"ContainerStarted","Data":"2a5377b4a06d868c8ef013d098a6a7f32a039b02325dd5ecc570e286432c1296"} Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.980001 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0896441a-c9db-4517-ae60-e0afa4cee74e","Type":"ContainerStarted","Data":"64283b1edbaba74d9344d2d371168f1278799683341492ec6eef87bb1601cc7d"} Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.986797 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerStarted","Data":"55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f"} Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.986960 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="ceilometer-central-agent" containerID="cri-o://eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10" gracePeriod=30 Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.987198 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.987235 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="proxy-httpd" containerID="cri-o://55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f" gracePeriod=30 Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.987275 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="sg-core" containerID="cri-o://74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053" gracePeriod=30 Nov 24 11:47:21 crc kubenswrapper[4789]: I1124 11:47:21.987306 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="ceilometer-notification-agent" containerID="cri-o://214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b" gracePeriod=30 Nov 24 11:47:22 crc kubenswrapper[4789]: I1124 11:47:22.010525 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5f66db59b9-zr4gs"] Nov 24 11:47:22 crc kubenswrapper[4789]: I1124 11:47:22.019308 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-zr4gs"] Nov 24 11:47:22 crc kubenswrapper[4789]: I1124 11:47:22.032842 4789 scope.go:117] "RemoveContainer" containerID="27cac019669893f9ca4c054dfeca345edaaabba2669cefc3aa4658aef8be8c9f" Nov 24 11:47:22 crc kubenswrapper[4789]: I1124 11:47:22.060857 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.826842313 podStartE2EDuration="50.060840066s" podCreationTimestamp="2025-11-24 11:46:32 +0000 UTC" firstStartedPulling="2025-11-24 11:46:34.182128214 +0000 UTC m=+976.764599593" lastFinishedPulling="2025-11-24 11:47:20.416125967 +0000 UTC m=+1022.998597346" observedRunningTime="2025-11-24 11:47:22.022096098 +0000 UTC m=+1024.604567477" watchObservedRunningTime="2025-11-24 11:47:22.060840066 +0000 UTC m=+1024.643311445" Nov 24 11:47:22 crc kubenswrapper[4789]: I1124 11:47:22.211344 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" path="/var/lib/kubelet/pods/cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b/volumes" Nov 24 11:47:22 crc kubenswrapper[4789]: I1124 11:47:22.937702 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:22 crc kubenswrapper[4789]: I1124 11:47:22.999026 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" event={"ID":"e6978127-8354-4009-af79-a96fc2e47c9f","Type":"ContainerStarted","Data":"731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199"} Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:22.999185 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.002206 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0896441a-c9db-4517-ae60-e0afa4cee74e","Type":"ContainerStarted","Data":"5952190f1db4df3f399bb853cfb5c572f8f671f1fb96ed9693babbe863d1e21c"} Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.009499 4789 generic.go:334] "Generic (PLEG): container finished" podID="0c87d408-bf3b-4156-9116-110b948e3ead" containerID="55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f" exitCode=0 Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.009531 4789 generic.go:334] "Generic (PLEG): container finished" podID="0c87d408-bf3b-4156-9116-110b948e3ead" containerID="74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053" exitCode=2 Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.009542 4789 generic.go:334] "Generic (PLEG): container finished" podID="0c87d408-bf3b-4156-9116-110b948e3ead" containerID="eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10" exitCode=0 Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.009538 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerDied","Data":"55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f"} Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.009589 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerDied","Data":"74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053"} Nov 
24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.009601 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerDied","Data":"eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10"} Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.825776 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:23 crc kubenswrapper[4789]: I1124 11:47:23.846095 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" podStartSLOduration=4.8460807280000004 podStartE2EDuration="4.846080728s" podCreationTimestamp="2025-11-24 11:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:23.024900907 +0000 UTC m=+1025.607372296" watchObservedRunningTime="2025-11-24 11:47:23.846080728 +0000 UTC m=+1026.428552107" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.061055 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16f2b7dc-63ee-4cc6-8787-2b15971d30b5","Type":"ContainerStarted","Data":"93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983"} Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.081603 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0896441a-c9db-4517-ae60-e0afa4cee74e","Type":"ContainerStarted","Data":"dfb9ceac80af2c7120075fa098eca7f07ce155210cccf3d12c4a88c193a92986"} Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.081653 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerName="cinder-api-log" containerID="cri-o://5952190f1db4df3f399bb853cfb5c572f8f671f1fb96ed9693babbe863d1e21c" gracePeriod=30 Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.081738 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.081797 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerName="cinder-api" containerID="cri-o://dfb9ceac80af2c7120075fa098eca7f07ce155210cccf3d12c4a88c193a92986" gracePeriod=30 Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.102655 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.10263734 podStartE2EDuration="4.10263734s" podCreationTimestamp="2025-11-24 11:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:24.100823095 +0000 UTC m=+1026.683294474" watchObservedRunningTime="2025-11-24 11:47:24.10263734 +0000 UTC m=+1026.685108719" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.685445 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.696349 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.747419 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-log-httpd\") pod \"0c87d408-bf3b-4156-9116-110b948e3ead\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.747499 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-combined-ca-bundle\") pod \"0c87d408-bf3b-4156-9116-110b948e3ead\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.747516 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-sg-core-conf-yaml\") pod \"0c87d408-bf3b-4156-9116-110b948e3ead\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.747542 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-scripts\") pod \"0c87d408-bf3b-4156-9116-110b948e3ead\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.747557 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-config-data\") pod \"0c87d408-bf3b-4156-9116-110b948e3ead\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.747627 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlpqf\" (UniqueName: \"kubernetes.io/projected/0c87d408-bf3b-4156-9116-110b948e3ead-kube-api-access-jlpqf\") pod \"0c87d408-bf3b-4156-9116-110b948e3ead\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.747783 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-run-httpd\") pod \"0c87d408-bf3b-4156-9116-110b948e3ead\" (UID: \"0c87d408-bf3b-4156-9116-110b948e3ead\") " Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.748533 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c87d408-bf3b-4156-9116-110b948e3ead" (UID: "0c87d408-bf3b-4156-9116-110b948e3ead"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.748778 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c87d408-bf3b-4156-9116-110b948e3ead" (UID: "0c87d408-bf3b-4156-9116-110b948e3ead"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.780995 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-scripts" (OuterVolumeSpecName: "scripts") pod "0c87d408-bf3b-4156-9116-110b948e3ead" (UID: "0c87d408-bf3b-4156-9116-110b948e3ead"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.784479 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c87d408-bf3b-4156-9116-110b948e3ead-kube-api-access-jlpqf" (OuterVolumeSpecName: "kube-api-access-jlpqf") pod "0c87d408-bf3b-4156-9116-110b948e3ead" (UID: "0c87d408-bf3b-4156-9116-110b948e3ead"). InnerVolumeSpecName "kube-api-access-jlpqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.809587 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c87d408-bf3b-4156-9116-110b948e3ead" (UID: "0c87d408-bf3b-4156-9116-110b948e3ead"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.849640 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.849695 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c87d408-bf3b-4156-9116-110b948e3ead-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.849708 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.849720 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.849731 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlpqf\" (UniqueName: \"kubernetes.io/projected/0c87d408-bf3b-4156-9116-110b948e3ead-kube-api-access-jlpqf\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.856513 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c87d408-bf3b-4156-9116-110b948e3ead" (UID: "0c87d408-bf3b-4156-9116-110b948e3ead"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.874332 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-config-data" (OuterVolumeSpecName: "config-data") pod "0c87d408-bf3b-4156-9116-110b948e3ead" (UID: "0c87d408-bf3b-4156-9116-110b948e3ead"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.950696 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:24 crc kubenswrapper[4789]: I1124 11:47:24.950728 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c87d408-bf3b-4156-9116-110b948e3ead-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.052477 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.091126 4789 generic.go:334] "Generic (PLEG): container finished" podID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerID="5952190f1db4df3f399bb853cfb5c572f8f671f1fb96ed9693babbe863d1e21c" exitCode=143 Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.091204 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0896441a-c9db-4517-ae60-e0afa4cee74e","Type":"ContainerDied","Data":"5952190f1db4df3f399bb853cfb5c572f8f671f1fb96ed9693babbe863d1e21c"} Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.095482 4789 generic.go:334] "Generic (PLEG): container finished" podID="0c87d408-bf3b-4156-9116-110b948e3ead" containerID="214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b" exitCode=0 Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.095564 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.095557 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerDied","Data":"214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b"} Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.095910 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c87d408-bf3b-4156-9116-110b948e3ead","Type":"ContainerDied","Data":"4cad5290bcab57fa34e85cfd3463e4975002f4a93e7cd150076daa9de74f9295"} Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.095937 4789 scope.go:117] "RemoveContainer" containerID="55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.097557 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16f2b7dc-63ee-4cc6-8787-2b15971d30b5","Type":"ContainerStarted","Data":"ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6"} Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.144324 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.815015427 podStartE2EDuration="6.144303009s" podCreationTimestamp="2025-11-24 11:47:19 +0000 UTC" firstStartedPulling="2025-11-24 11:47:21.16260978 +0000 UTC m=+1023.745081159" lastFinishedPulling="2025-11-24 11:47:22.491897362 +0000 UTC m=+1025.074368741" observedRunningTime="2025-11-24 11:47:25.142724951 +0000 UTC m=+1027.725196330" watchObservedRunningTime="2025-11-24 11:47:25.144303009 +0000 UTC m=+1027.726774388" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.176653 4789 scope.go:117] "RemoveContainer" 
containerID="74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.215561 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.231141 4789 scope.go:117] "RemoveContainer" containerID="214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.302966 4789 scope.go:117] "RemoveContainer" containerID="eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.329500 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.336058 4789 scope.go:117] "RemoveContainer" containerID="55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.337212 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f\": container with ID starting with 55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f not found: ID does not exist" containerID="55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.337262 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f"} err="failed to get container status \"55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f\": rpc error: code = NotFound desc = could not find container \"55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f\": container with ID starting with 55f620a2f376b2ccee2fbb1879f120940a3645ab263192472ff8e39f5aef138f not found: ID does not exist" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.337306 4789 scope.go:117] "RemoveContainer" containerID="74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.337823 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053\": container with ID starting with 74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053 not found: ID does not exist" containerID="74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.337859 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053"} err="failed to get container status \"74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053\": rpc error: code = NotFound desc = could not find container \"74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053\": container with ID starting with 74891286f6737c133baab385e02e00f72c9d3c624539dd2d91a513bb98367053 not found: ID does not exist" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.337877 4789 scope.go:117] "RemoveContainer" containerID="214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.338276 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b\": container with ID starting with 214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b not found: ID does not exist" containerID="214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.338332 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b"} err="failed to get container status \"214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b\": rpc error: code = NotFound desc = could not find container \"214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b\": container with ID starting with 214bad3787b34574213e2dccf3e08dab06ed07d848e91b50f27319e37ebef65b not found: ID does not exist" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.338361 4789 scope.go:117] "RemoveContainer" containerID="eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.361648 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10\": container with ID starting with eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10 not found: ID does not exist" containerID="eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.362595 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10"} err="failed to get container status \"eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10\": rpc error: code = NotFound desc = could not find container \"eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10\": container with ID starting with eda763e5b5d63022d9cf290c856050412b0e91487174fd25f8c1b5bb1ee3dc10 not found: ID does not exist" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.378541 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.378610 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.379011 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="sg-core" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379026 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="sg-core" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.379048 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="ceilometer-central-agent" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379054 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="ceilometer-central-agent" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.379064 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="ceilometer-notification-agent" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379070 4789 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="ceilometer-notification-agent" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.379085 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="proxy-httpd" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379091 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="proxy-httpd" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.379100 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" containerName="dnsmasq-dns" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379105 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" containerName="dnsmasq-dns" Nov 24 11:47:25 crc kubenswrapper[4789]: E1124 11:47:25.379115 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" containerName="init" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379130 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" containerName="init" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379282 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="proxy-httpd" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379292 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6a1ec5-8f3b-48ef-ba4a-ea43df54993b" containerName="dnsmasq-dns" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379301 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="sg-core" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379314 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="ceilometer-notification-agent" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.379326 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" containerName="ceilometer-central-agent" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.380926 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.385007 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.402044 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.402235 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.564341 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-config-data\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.565542 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.565707 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-scripts\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.565799 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.565896 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-run-httpd\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.566017 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8rs\" (UniqueName: \"kubernetes.io/projected/d7b56404-36d7-44f3-92c3-5835ea030fb1-kube-api-access-cc8rs\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.566111 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-log-httpd\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.667362 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-scripts\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.667405 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.667429 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-run-httpd\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.667479 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc8rs\" (UniqueName: \"kubernetes.io/projected/d7b56404-36d7-44f3-92c3-5835ea030fb1-kube-api-access-cc8rs\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.667504 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-log-httpd\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.667528 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-config-data\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.667567 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.668498 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-run-httpd\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.668853 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-log-httpd\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.673627 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.675407 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-config-data\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.681328 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.683330 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-scripts\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.687783 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8rs\" (UniqueName: \"kubernetes.io/projected/d7b56404-36d7-44f3-92c3-5835ea030fb1-kube-api-access-cc8rs\") pod \"ceilometer-0\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " pod="openstack/ceilometer-0" Nov 24 11:47:25 crc kubenswrapper[4789]: I1124 11:47:25.706688 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:47:26 crc kubenswrapper[4789]: I1124 11:47:26.179787 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c87d408-bf3b-4156-9116-110b948e3ead" path="/var/lib/kubelet/pods/0c87d408-bf3b-4156-9116-110b948e3ead/volumes" Nov 24 11:47:26 crc kubenswrapper[4789]: I1124 11:47:26.242644 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:47:26 crc kubenswrapper[4789]: W1124 11:47:26.249519 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b56404_36d7_44f3_92c3_5835ea030fb1.slice/crio-8f585c955d978cdc54140bd5f88a77a83f6555b0646ca71f861c2b5e17fdc4bb WatchSource:0}: Error finding container 8f585c955d978cdc54140bd5f88a77a83f6555b0646ca71f861c2b5e17fdc4bb: Status 404 returned error can't find the container with id 8f585c955d978cdc54140bd5f88a77a83f6555b0646ca71f861c2b5e17fdc4bb Nov 24 11:47:27 crc kubenswrapper[4789]: I1124 11:47:27.117408 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerStarted","Data":"accfd5d710fec79aeeaf67c9f1d81aa8aaa6cb97c42e2f0ca08d11869b430790"} Nov 24 11:47:27 crc kubenswrapper[4789]: I1124 11:47:27.117774 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerStarted","Data":"8f585c955d978cdc54140bd5f88a77a83f6555b0646ca71f861c2b5e17fdc4bb"} Nov 24 11:47:27 crc kubenswrapper[4789]: I1124 11:47:27.647379 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-85c5468469-htqfg" Nov 24 11:47:27 crc kubenswrapper[4789]: I1124 11:47:27.711229 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f8c9d6bfb-grt9w"] Nov 24 11:47:27 crc kubenswrapper[4789]: I1124 11:47:27.711959 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f8c9d6bfb-grt9w" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerName="neutron-api" containerID="cri-o://bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a" gracePeriod=30 Nov 24 11:47:27 crc kubenswrapper[4789]: I1124 11:47:27.712484 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f8c9d6bfb-grt9w" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" 
containerName="neutron-httpd" containerID="cri-o://98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d" gracePeriod=30 Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.104609 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb85f479d-hgd4m" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:46890->10.217.0.148:9311: read: connection reset by peer" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.104835 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fb85f479d-hgd4m" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:46904->10.217.0.148:9311: read: connection reset by peer" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.148028 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerStarted","Data":"7c1df969ee8d865b91d4b105b09acaf1554e3f93c744d47d4dd01d461f842b5c"} Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.170695 4789 generic.go:334] "Generic (PLEG): container finished" podID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerID="98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d" exitCode=0 Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.201582 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8c9d6bfb-grt9w" event={"ID":"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe","Type":"ContainerDied","Data":"98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d"} Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.737643 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.833989 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data\") pod \"60b78f2d-a541-467f-88f5-daeffe5c9938\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.834080 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jvcf\" (UniqueName: \"kubernetes.io/projected/60b78f2d-a541-467f-88f5-daeffe5c9938-kube-api-access-9jvcf\") pod \"60b78f2d-a541-467f-88f5-daeffe5c9938\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.834246 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-combined-ca-bundle\") pod \"60b78f2d-a541-467f-88f5-daeffe5c9938\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.834333 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data-custom\") pod \"60b78f2d-a541-467f-88f5-daeffe5c9938\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.834389 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60b78f2d-a541-467f-88f5-daeffe5c9938-logs\") pod \"60b78f2d-a541-467f-88f5-daeffe5c9938\" (UID: \"60b78f2d-a541-467f-88f5-daeffe5c9938\") " Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.835139 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60b78f2d-a541-467f-88f5-daeffe5c9938-logs" (OuterVolumeSpecName: "logs") pod "60b78f2d-a541-467f-88f5-daeffe5c9938" (UID: "60b78f2d-a541-467f-88f5-daeffe5c9938"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.856116 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b78f2d-a541-467f-88f5-daeffe5c9938-kube-api-access-9jvcf" (OuterVolumeSpecName: "kube-api-access-9jvcf") pod "60b78f2d-a541-467f-88f5-daeffe5c9938" (UID: "60b78f2d-a541-467f-88f5-daeffe5c9938"). InnerVolumeSpecName "kube-api-access-9jvcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.861635 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "60b78f2d-a541-467f-88f5-daeffe5c9938" (UID: "60b78f2d-a541-467f-88f5-daeffe5c9938"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.887570 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60b78f2d-a541-467f-88f5-daeffe5c9938" (UID: "60b78f2d-a541-467f-88f5-daeffe5c9938"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.901386 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data" (OuterVolumeSpecName: "config-data") pod "60b78f2d-a541-467f-88f5-daeffe5c9938" (UID: "60b78f2d-a541-467f-88f5-daeffe5c9938"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.937192 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.937220 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jvcf\" (UniqueName: \"kubernetes.io/projected/60b78f2d-a541-467f-88f5-daeffe5c9938-kube-api-access-9jvcf\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.937233 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.937243 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60b78f2d-a541-467f-88f5-daeffe5c9938-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:28 crc kubenswrapper[4789]: I1124 11:47:28.937251 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60b78f2d-a541-467f-88f5-daeffe5c9938-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.186622 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerStarted","Data":"80d80b26b6fa32832ca8f39975f78aeed394a370b1c2fdd0aa7cf72a244a01c6"} Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.188737 4789 generic.go:334] "Generic (PLEG): container finished" podID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerID="363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36" exitCode=0 Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.188765 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb85f479d-hgd4m" event={"ID":"60b78f2d-a541-467f-88f5-daeffe5c9938","Type":"ContainerDied","Data":"363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36"} Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.188781 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fb85f479d-hgd4m" event={"ID":"60b78f2d-a541-467f-88f5-daeffe5c9938","Type":"ContainerDied","Data":"d5d0127585b3c250417173d076e91f2bf5b4f395073a269f3033795b2b4c0587"} Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.188799 4789 scope.go:117] "RemoveContainer" containerID="363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36" Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.188909 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7fb85f479d-hgd4m" Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.228589 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fb85f479d-hgd4m"] Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.236775 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7fb85f479d-hgd4m"] Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.239068 4789 scope.go:117] "RemoveContainer" containerID="0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111" Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.265939 4789 scope.go:117] "RemoveContainer" containerID="363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36" Nov 24 11:47:29 crc kubenswrapper[4789]: E1124 11:47:29.266268 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36\": container with ID starting with 363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36 not found: ID does not exist" containerID="363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36" Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.266296 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36"} err="failed to get container status \"363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36\": rpc error: code = NotFound desc = could not find container \"363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36\": container with ID starting with 363d2437e156c563d46e20b1821797034ecf1988ba03aab8e712e59199451a36 not found: ID does not exist" Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.266319 4789 scope.go:117] "RemoveContainer" containerID="0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111" Nov 24 11:47:29 crc kubenswrapper[4789]: E1124 11:47:29.266843 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111\": container with ID starting with 0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111 not found: ID does not exist" containerID="0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111" Nov 24 11:47:29 crc kubenswrapper[4789]: I1124 11:47:29.266928 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111"} err="failed to get container status \"0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111\": rpc error: code = NotFound desc = could not find container \"0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111\": container with ID starting with 0dcfbd283524599b48d4a2bbe3ec153b4bbca446445be7b850b4df511b0f4111 not found: ID does not exist" Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.181077 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" path="/var/lib/kubelet/pods/60b78f2d-a541-467f-88f5-daeffe5c9938/volumes" Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.202657 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerStarted","Data":"1b80598968511056049557fe826e7ccc22096cd5dbf273cef5e6de1c68c2c46d"} Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.204160 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.226746 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8040239219999998 podStartE2EDuration="5.226718267s" podCreationTimestamp="2025-11-24 11:47:25 +0000 UTC" firstStartedPulling="2025-11-24 11:47:26.252755495 +0000 UTC m=+1028.835226874" lastFinishedPulling="2025-11-24 11:47:29.67544983 +0000 UTC m=+1032.257921219" observedRunningTime="2025-11-24 11:47:30.225945218 +0000 UTC m=+1032.808416637" watchObservedRunningTime="2025-11-24 11:47:30.226718267 +0000 UTC m=+1032.809189646" Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.375781 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.469190 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-58nhn"] Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.469383 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-58nhn" podUID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" containerName="dnsmasq-dns" containerID="cri-o://0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d" gracePeriod=10 Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.501254 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 11:47:30 crc kubenswrapper[4789]: I1124 11:47:30.635344 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.011731 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.079238 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-dns-svc\") pod \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.079333 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-config\") pod \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.079378 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lnhw\" (UniqueName: \"kubernetes.io/projected/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-kube-api-access-7lnhw\") pod \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.079864 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-sb\") pod \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.079979 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-nb\") pod \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\" (UID: \"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a\") " Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.093754 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-kube-api-access-7lnhw" (OuterVolumeSpecName: "kube-api-access-7lnhw") pod "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" (UID: "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a"). InnerVolumeSpecName "kube-api-access-7lnhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.131489 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-config" (OuterVolumeSpecName: "config") pod "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" (UID: "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.143948 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" (UID: "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.147282 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" (UID: "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.159585 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" (UID: "de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.181966 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.181996 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.182018 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.182028 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lnhw\" (UniqueName: \"kubernetes.io/projected/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-kube-api-access-7lnhw\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.182041 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.222266 4789 generic.go:334] "Generic (PLEG): container finished" podID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" containerID="0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d" exitCode=0 Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.222308 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-58nhn" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.222392 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-58nhn" event={"ID":"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a","Type":"ContainerDied","Data":"0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d"} Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.222441 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-58nhn" event={"ID":"de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a","Type":"ContainerDied","Data":"5c85944eff51eecdc4ffa78514fd557c4490690806bb69721d6d2d98830cf596"} Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.222543 4789 scope.go:117] "RemoveContainer" containerID="0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.223099 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerName="probe" containerID="cri-o://ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6" gracePeriod=30 Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.223081 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerName="cinder-scheduler" containerID="cri-o://93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983" gracePeriod=30 Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.257949 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-58nhn"] Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.261683 4789 scope.go:117] "RemoveContainer" containerID="74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.265454 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-58nhn"] Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.284444 4789 scope.go:117] "RemoveContainer" containerID="0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d" Nov 24 11:47:31 crc kubenswrapper[4789]: E1124 11:47:31.284814 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d\": container with ID starting with 0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d not found: ID does not exist" containerID="0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.284842 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d"} err="failed to get container status \"0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d\": rpc error: code = NotFound desc = could not find container \"0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d\": container with ID starting with 0b00b0a4e636538f89d08a93d6b9d148448bae0fe641e152fa3e18985e622e2d not found: ID does not exist" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.284864 4789 scope.go:117] "RemoveContainer" containerID="74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b" Nov 24 11:47:31 crc kubenswrapper[4789]: E1124 
11:47:31.286028 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b\": container with ID starting with 74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b not found: ID does not exist" containerID="74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.286055 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b"} err="failed to get container status \"74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b\": rpc error: code = NotFound desc = could not find container \"74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b\": container with ID starting with 74c8b61da58db92c18dbc551f2a24a7707f02929ef1131d7c5e233469a577e3b not found: ID does not exist" Nov 24 11:47:31 crc kubenswrapper[4789]: I1124 11:47:31.855733 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.000862 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-combined-ca-bundle\") pod \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.001419 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-httpd-config\") pod \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.001448 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-config\") pod \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.001518 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-ovndb-tls-certs\") pod \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.001581 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cgfh\" (UniqueName: \"kubernetes.io/projected/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-kube-api-access-7cgfh\") pod \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\" (UID: \"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.023526 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" (UID: "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.024269 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-kube-api-access-7cgfh" (OuterVolumeSpecName: "kube-api-access-7cgfh") pod "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" (UID: "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe"). InnerVolumeSpecName "kube-api-access-7cgfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.086751 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" (UID: "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.086749 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-config" (OuterVolumeSpecName: "config") pod "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" (UID: "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.103478 4789 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.103518 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.103526 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cgfh\" (UniqueName: \"kubernetes.io/projected/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-kube-api-access-7cgfh\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.103540 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.120400 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" (UID: "a0a5ba08-77d3-4c41-b6b0-5efd19c469fe"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.183724 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" path="/var/lib/kubelet/pods/de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a/volumes" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.205892 4789 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.235054 4789 generic.go:334] "Generic (PLEG): container finished" podID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerID="bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a" exitCode=0 Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.235126 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8c9d6bfb-grt9w" event={"ID":"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe","Type":"ContainerDied","Data":"bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a"} Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.235159 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8c9d6bfb-grt9w" event={"ID":"a0a5ba08-77d3-4c41-b6b0-5efd19c469fe","Type":"ContainerDied","Data":"0a2b6561feee5c12a428795f1898503b5be52382e0c1ce1df0b6fb925d32e32c"} Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.235179 4789 scope.go:117] "RemoveContainer" containerID="98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.235297 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f8c9d6bfb-grt9w" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.239890 4789 generic.go:334] "Generic (PLEG): container finished" podID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerID="ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6" exitCode=0 Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.239949 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16f2b7dc-63ee-4cc6-8787-2b15971d30b5","Type":"ContainerDied","Data":"ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6"} Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.257879 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f8c9d6bfb-grt9w"] Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.264843 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f8c9d6bfb-grt9w"] Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.266101 4789 scope.go:117] "RemoveContainer" containerID="bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.331010 4789 scope.go:117] "RemoveContainer" containerID="98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d" Nov 24 11:47:32 crc kubenswrapper[4789]: E1124 11:47:32.331373 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d\": container with ID starting with 98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d not found: ID does not exist" containerID="98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 
11:47:32.331413 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d"} err="failed to get container status \"98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d\": rpc error: code = NotFound desc = could not find container \"98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d\": container with ID starting with 98fa3a21acad22e8c4c3803a0becf30981b25659b3876be43f2dad4ce79d615d not found: ID does not exist" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.331437 4789 scope.go:117] "RemoveContainer" containerID="bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a" Nov 24 11:47:32 crc kubenswrapper[4789]: E1124 11:47:32.331743 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a\": container with ID starting with bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a not found: ID does not exist" containerID="bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.331771 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a"} err="failed to get container status \"bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a\": rpc error: code = NotFound desc = could not find container \"bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a\": container with ID starting with bb32f3b4ddf429d5fba90a3afcd08a133d98739592534e9d48f50034b0bfa71a not found: ID does not exist" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.745480 4789 util.go:48] "No ready sandbox for pod can be found. 
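A pattern worth noting in the teardowns above (barbican-api, dnsmasq-dns, neutron): each "RemoveContainer" is followed by "ContainerStatus from runtime service failed ... NotFound" and a "DeleteContainer returned error" entry, yet cleanup proceeds normally. The container is already gone (removed along with its pod sandbox), so NotFound is effectively success; deletion is idempotent. A sketch of that pattern with gRPC status codes (a hypothetical helper for illustration, not kubelet's actual code):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent treats NotFound from the runtime as success:
    // deleting something that no longer exists is not a failure.
    func removeIfPresent(remove func(id string) error, id string) error {
        if err := remove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                return nil // already removed; nothing to do
            }
            return fmt.Errorf("remove container %s: %w", id, err)
        }
        return nil
    }

    func main() {
        runtimeRemove := func(string) error {
            return status.Error(codes.NotFound, "could not find container")
        }
        fmt.Println(removeIfPresent(runtimeRemove, "0b00b0a4...")) // <nil>
    }
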
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.817486 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q694b\" (UniqueName: \"kubernetes.io/projected/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-kube-api-access-q694b\") pod \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.817542 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data-custom\") pod \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.817576 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data\") pod \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.817601 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-scripts\") pod \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.817639 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-etc-machine-id\") pod \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.817700 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-combined-ca-bundle\") pod \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\" (UID: \"16f2b7dc-63ee-4cc6-8787-2b15971d30b5\") " Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.825257 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "16f2b7dc-63ee-4cc6-8787-2b15971d30b5" (UID: "16f2b7dc-63ee-4cc6-8787-2b15971d30b5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.827586 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-kube-api-access-q694b" (OuterVolumeSpecName: "kube-api-access-q694b") pod "16f2b7dc-63ee-4cc6-8787-2b15971d30b5" (UID: "16f2b7dc-63ee-4cc6-8787-2b15971d30b5"). InnerVolumeSpecName "kube-api-access-q694b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.827775 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-scripts" (OuterVolumeSpecName: "scripts") pod "16f2b7dc-63ee-4cc6-8787-2b15971d30b5" (UID: "16f2b7dc-63ee-4cc6-8787-2b15971d30b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.840689 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "16f2b7dc-63ee-4cc6-8787-2b15971d30b5" (UID: "16f2b7dc-63ee-4cc6-8787-2b15971d30b5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.920089 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q694b\" (UniqueName: \"kubernetes.io/projected/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-kube-api-access-q694b\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.920121 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.920132 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.920140 4789 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.932695 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16f2b7dc-63ee-4cc6-8787-2b15971d30b5" (UID: "16f2b7dc-63ee-4cc6-8787-2b15971d30b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:32 crc kubenswrapper[4789]: I1124 11:47:32.973407 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data" (OuterVolumeSpecName: "config-data") pod "16f2b7dc-63ee-4cc6-8787-2b15971d30b5" (UID: "16f2b7dc-63ee-4cc6-8787-2b15971d30b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.021362 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.021391 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2b7dc-63ee-4cc6-8787-2b15971d30b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.117945 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.260143 4789 generic.go:334] "Generic (PLEG): container finished" podID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerID="93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983" exitCode=0 Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.260183 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16f2b7dc-63ee-4cc6-8787-2b15971d30b5","Type":"ContainerDied","Data":"93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983"} Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.260211 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16f2b7dc-63ee-4cc6-8787-2b15971d30b5","Type":"ContainerDied","Data":"2a5377b4a06d868c8ef013d098a6a7f32a039b02325dd5ecc570e286432c1296"} Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.260228 4789 scope.go:117] "RemoveContainer" containerID="ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.260228 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.286664 4789 scope.go:117] "RemoveContainer" containerID="93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.291801 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.311755 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.318370 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.318678 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api-log" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.318695 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api-log" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.318714 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.318721 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.318736 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerName="neutron-api" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.318742 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerName="neutron-api" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.322758 4789 scope.go:117] "RemoveContainer" containerID="ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.323504 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6\": container with ID starting with ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6 not found: ID does not exist" containerID="ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.323533 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6"} err="failed to get container status \"ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6\": rpc error: code = NotFound desc = could not find container \"ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6\": container with ID starting with ae8dc0e35916dead3edd76523805911eaef39f1949c29ec17bd76f7c1834e3b6 not found: ID does not exist" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.323555 4789 scope.go:117] "RemoveContainer" containerID="93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.323539 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" containerName="dnsmasq-dns" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 
11:47:33.323640 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" containerName="dnsmasq-dns" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.323696 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerName="cinder-scheduler" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.323707 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerName="cinder-scheduler" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.323727 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerName="neutron-httpd" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.323736 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerName="neutron-httpd" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.323757 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983\": container with ID starting with 93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983 not found: ID does not exist" containerID="93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.323778 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983"} err="failed to get container status \"93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983\": rpc error: code = NotFound desc = could not find container \"93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983\": container with ID starting with 93e6e2db83a3cdd62df67611bb4b98cdec8e0c4fc1f5edc03d02485e4f308983 not found: ID does not exist" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.323786 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" containerName="init" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.323796 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" containerName="init" Nov 24 11:47:33 crc kubenswrapper[4789]: E1124 11:47:33.323814 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerName="probe" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.323821 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerName="probe" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.324203 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerName="cinder-scheduler" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.324229 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1f6cf9-b04d-4cd3-bb5e-bfdc91ab101a" containerName="dnsmasq-dns" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.324246 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerName="neutron-httpd" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.324261 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api" Nov 24 
11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.324272 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" containerName="neutron-api" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.324291 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b78f2d-a541-467f-88f5-daeffe5c9938" containerName="barbican-api-log" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.324307 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" containerName="probe" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.325452 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.331880 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.339573 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.427647 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51c9e2b5-9521-4872-ab1a-f0981449f506-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.427737 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.427759 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.427803 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-config-data\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.427846 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdq5p\" (UniqueName: \"kubernetes.io/projected/51c9e2b5-9521-4872-ab1a-f0981449f506-kube-api-access-cdq5p\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.427871 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-scripts\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.529206 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-cdq5p\" (UniqueName: \"kubernetes.io/projected/51c9e2b5-9521-4872-ab1a-f0981449f506-kube-api-access-cdq5p\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.529271 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-scripts\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.529343 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51c9e2b5-9521-4872-ab1a-f0981449f506-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.529407 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.529429 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.529495 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-config-data\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.531087 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51c9e2b5-9521-4872-ab1a-f0981449f506-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.534483 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-config-data\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.534886 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.535543 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc 
kubenswrapper[4789]: I1124 11:47:33.535566 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c9e2b5-9521-4872-ab1a-f0981449f506-scripts\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.549200 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdq5p\" (UniqueName: \"kubernetes.io/projected/51c9e2b5-9521-4872-ab1a-f0981449f506-kube-api-access-cdq5p\") pod \"cinder-scheduler-0\" (UID: \"51c9e2b5-9521-4872-ab1a-f0981449f506\") " pod="openstack/cinder-scheduler-0" Nov 24 11:47:33 crc kubenswrapper[4789]: I1124 11:47:33.645116 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:47:34 crc kubenswrapper[4789]: I1124 11:47:34.104304 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:47:34 crc kubenswrapper[4789]: W1124 11:47:34.105642 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9e2b5_9521_4872_ab1a_f0981449f506.slice/crio-465ee335695712a36ae7aabc58ad0f34488d8c8f2e36352d5e502520729ee6e5 WatchSource:0}: Error finding container 465ee335695712a36ae7aabc58ad0f34488d8c8f2e36352d5e502520729ee6e5: Status 404 returned error can't find the container with id 465ee335695712a36ae7aabc58ad0f34488d8c8f2e36352d5e502520729ee6e5 Nov 24 11:47:34 crc kubenswrapper[4789]: I1124 11:47:34.183044 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f2b7dc-63ee-4cc6-8787-2b15971d30b5" path="/var/lib/kubelet/pods/16f2b7dc-63ee-4cc6-8787-2b15971d30b5/volumes" Nov 24 11:47:34 crc kubenswrapper[4789]: I1124 11:47:34.183844 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a5ba08-77d3-4c41-b6b0-5efd19c469fe" path="/var/lib/kubelet/pods/a0a5ba08-77d3-4c41-b6b0-5efd19c469fe/volumes" Nov 24 11:47:34 crc kubenswrapper[4789]: I1124 11:47:34.271746 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51c9e2b5-9521-4872-ab1a-f0981449f506","Type":"ContainerStarted","Data":"465ee335695712a36ae7aabc58ad0f34488d8c8f2e36352d5e502520729ee6e5"} Nov 24 11:47:35 crc kubenswrapper[4789]: I1124 11:47:35.281383 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51c9e2b5-9521-4872-ab1a-f0981449f506","Type":"ContainerStarted","Data":"b4283a24d140ac9cb6e055a4aee9c5b66fe3f29fb13186dcf28a6eadc35e46d1"} Nov 24 11:47:35 crc kubenswrapper[4789]: I1124 11:47:35.282544 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51c9e2b5-9521-4872-ab1a-f0981449f506","Type":"ContainerStarted","Data":"98fde0ee18bd2b77cb984f5ec55dba4d528b8716ecb6dc7a67b8a558439e07b5"} Nov 24 11:47:35 crc kubenswrapper[4789]: I1124 11:47:35.307381 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.307367102 podStartE2EDuration="2.307367102s" podCreationTimestamp="2025-11-24 11:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:35.301340577 +0000 UTC m=+1037.883811956" watchObservedRunningTime="2025-11-24 11:47:35.307367102 +0000 UTC m=+1037.889838471" Nov 24 
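The cinder-scheduler-0 recreation above also shows the volume manager's two symmetric phases: on the way down, "operationExecutor.UnmountVolume started" then "UnmountVolume.TearDown succeeded" then "Volume detached"; on the way up, VerifyControllerAttachedVolume then "MountVolume started" then "MountVolume.SetUp succeeded". (Note too that this pod's "Observed pod startup duration" reports podStartSLOduration equal to podStartE2EDuration, 2.307367102s, because no images were pulled: both pull timestamps are the zero time.) The mount and unmount decisions fall out of a desired-state vs. actual-state comparison; a toy version of that reconcile step, a sketch of the idea only and not the real volumemanager code:

    package main

    import "fmt"

    // reconcile mounts what is desired but not yet actual and
    // unmounts what is actual but no longer desired.
    func reconcile(desired, actual map[string]bool) (mount, unmount []string) {
        for v := range desired {
            if !actual[v] {
                mount = append(mount, v)
            }
        }
        for v := range actual {
            if !desired[v] {
                unmount = append(unmount, v)
            }
        }
        return mount, unmount
    }

    func main() {
        // Volume names taken from the entries above; the pairing is illustrative.
        desired := map[string]bool{"config-data": true, "scripts": true, "etc-machine-id": true}
        actual := map[string]bool{"etc-machine-id": true, "kube-api-access-q694b": true}
        m, u := reconcile(desired, actual)
        fmt.Println("mount:", m, "unmount:", u)
    }
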
Nov 24 11:47:35 crc kubenswrapper[4789]: I1124 11:47:35.603703 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-784c4967d9-9h8jd"
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.753728 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.755783 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.760958 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2z8xs"
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.761714 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.768272 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.772566 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.917431 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6zx6\" (UniqueName: \"kubernetes.io/projected/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-kube-api-access-m6zx6\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.917560 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config-secret\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.917615 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:37 crc kubenswrapper[4789]: I1124 11:47:37.917686 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.019823 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.020011 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6zx6\" (UniqueName: \"kubernetes.io/projected/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-kube-api-access-m6zx6\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.020104 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config-secret\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.020153 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.021490 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.035049 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config-secret\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.036116 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.043559 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6zx6\" (UniqueName: \"kubernetes.io/projected/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-kube-api-access-m6zx6\") pod \"openstackclient\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.075798 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.080626 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.094090 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.122949 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.123922 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.156278 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.222324 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06801047-ac5f-4da6-a843-3c064e628c38-openstack-config\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.222404 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f26tq\" (UniqueName: \"kubernetes.io/projected/06801047-ac5f-4da6-a843-3c064e628c38-kube-api-access-f26tq\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.222434 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06801047-ac5f-4da6-a843-3c064e628c38-openstack-config-secret\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.222552 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06801047-ac5f-4da6-a843-3c064e628c38-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: E1124 11:47:38.227018 4789 log.go:32] "RunPodSandbox from runtime service failed" err=<
Nov 24 11:47:38 crc kubenswrapper[4789]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a99ba3d3-fb7c-4187-b74d-3643fb11d6aa_0(09d6409104ac787ac0e4eaf466a2f7ea5560be84c801b70b81f919a72d348742): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"09d6409104ac787ac0e4eaf466a2f7ea5560be84c801b70b81f919a72d348742" Netns:"/var/run/netns/a0a7c856-32ab-4a66-9f27-0ac5e08d0481" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=09d6409104ac787ac0e4eaf466a2f7ea5560be84c801b70b81f919a72d348742;K8S_POD_UID=a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa]: expected pod UID "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" but got "06801047-ac5f-4da6-a843-3c064e628c38" from Kube API
Nov 24 11:47:38 crc kubenswrapper[4789]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 24 11:47:38 crc kubenswrapper[4789]: >
Nov 24 11:47:38 crc kubenswrapper[4789]: E1124 11:47:38.227079 4789 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Nov 24 11:47:38 crc kubenswrapper[4789]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a99ba3d3-fb7c-4187-b74d-3643fb11d6aa_0(09d6409104ac787ac0e4eaf466a2f7ea5560be84c801b70b81f919a72d348742): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"09d6409104ac787ac0e4eaf466a2f7ea5560be84c801b70b81f919a72d348742" Netns:"/var/run/netns/a0a7c856-32ab-4a66-9f27-0ac5e08d0481" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=09d6409104ac787ac0e4eaf466a2f7ea5560be84c801b70b81f919a72d348742;K8S_POD_UID=a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa]: expected pod UID "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" but got "06801047-ac5f-4da6-a843-3c064e628c38" from Kube API
Nov 24 11:47:38 crc kubenswrapper[4789]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 24 11:47:38 crc kubenswrapper[4789]: > pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.304509 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.314611 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.324442 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06801047-ac5f-4da6-a843-3c064e628c38-openstack-config\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.324538 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f26tq\" (UniqueName: \"kubernetes.io/projected/06801047-ac5f-4da6-a843-3c064e628c38-kube-api-access-f26tq\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.324566 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06801047-ac5f-4da6-a843-3c064e628c38-openstack-config-secret\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.324600 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06801047-ac5f-4da6-a843-3c064e628c38-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.325670 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" podUID="06801047-ac5f-4da6-a843-3c064e628c38"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.325739 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06801047-ac5f-4da6-a843-3c064e628c38-openstack-config\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.329076 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06801047-ac5f-4da6-a843-3c064e628c38-openstack-config-secret\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.329402 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06801047-ac5f-4da6-a843-3c064e628c38-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.345976 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f26tq\" (UniqueName: \"kubernetes.io/projected/06801047-ac5f-4da6-a843-3c064e628c38-kube-api-access-f26tq\") pod \"openstackclient\" (UID: \"06801047-ac5f-4da6-a843-3c064e628c38\") " pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.425516 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config-secret\") pod \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") "
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.425825 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-combined-ca-bundle\") pod \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") "
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.425940 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6zx6\" (UniqueName: \"kubernetes.io/projected/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-kube-api-access-m6zx6\") pod \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") "
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.426326 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config\") pod \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\" (UID: \"a99ba3d3-fb7c-4187-b74d-3643fb11d6aa\") "
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.427027 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" (UID: "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.428896 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" (UID: "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.429502 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" (UID: "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.429630 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-kube-api-access-m6zx6" (OuterVolumeSpecName: "kube-api-access-m6zx6") pod "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" (UID: "a99ba3d3-fb7c-4187-b74d-3643fb11d6aa"). InnerVolumeSpecName "kube-api-access-m6zx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.486641 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.532056 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.532094 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6zx6\" (UniqueName: \"kubernetes.io/projected/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-kube-api-access-m6zx6\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.532110 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.532122 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.646594 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 24 11:47:38 crc kubenswrapper[4789]: I1124 11:47:38.922700 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 24 11:47:39 crc kubenswrapper[4789]: I1124 11:47:39.312720 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 11:47:39 crc kubenswrapper[4789]: I1124 11:47:39.312931 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"06801047-ac5f-4da6-a843-3c064e628c38","Type":"ContainerStarted","Data":"011076eca3aba9f2a2a2e08dd97959d4820286fb59be39823a550d67bf37138a"}
Nov 24 11:47:39 crc kubenswrapper[4789]: I1124 11:47:39.325073 4789 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" podUID="06801047-ac5f-4da6-a843-3c064e628c38"
Nov 24 11:47:40 crc kubenswrapper[4789]: I1124 11:47:40.181956 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a99ba3d3-fb7c-4187-b74d-3643fb11d6aa" path="/var/lib/kubelet/pods/a99ba3d3-fb7c-4187-b74d-3643fb11d6aa/volumes"
Nov 24 11:47:40 crc kubenswrapper[4789]: I1124 11:47:40.954538 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-546dc675b-x2vpf"
Nov 24 11:47:40 crc kubenswrapper[4789]: I1124 11:47:40.961195 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-546dc675b-x2vpf"
Nov 24 11:47:42 crc kubenswrapper[4789]: I1124 11:47:42.952761 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:47:42 crc kubenswrapper[4789]: I1124 11:47:42.953214 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-central-agent" containerID="cri-o://accfd5d710fec79aeeaf67c9f1d81aa8aaa6cb97c42e2f0ca08d11869b430790" gracePeriod=30
Nov 24 11:47:42 crc kubenswrapper[4789]: I1124 11:47:42.953288 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="proxy-httpd" containerID="cri-o://1b80598968511056049557fe826e7ccc22096cd5dbf273cef5e6de1c68c2c46d" gracePeriod=30
Nov 24 11:47:42 crc kubenswrapper[4789]: I1124 11:47:42.953371 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-notification-agent" containerID="cri-o://7c1df969ee8d865b91d4b105b09acaf1554e3f93c744d47d4dd01d461f842b5c" gracePeriod=30
Nov 24 11:47:42 crc kubenswrapper[4789]: I1124 11:47:42.953418 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="sg-core" containerID="cri-o://80d80b26b6fa32832ca8f39975f78aeed394a370b1c2fdd0aa7cf72a244a01c6" gracePeriod=30
Nov 24 11:47:42 crc kubenswrapper[4789]: I1124 11:47:42.962968 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.153:3000/\": EOF"
Nov 24 11:47:43 crc kubenswrapper[4789]: I1124 11:47:43.387040 4789 generic.go:334] "Generic (PLEG): container finished" podID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerID="1b80598968511056049557fe826e7ccc22096cd5dbf273cef5e6de1c68c2c46d" exitCode=0
Nov 24 11:47:43 crc kubenswrapper[4789]: I1124 11:47:43.387322 4789 generic.go:334] "Generic (PLEG): container finished" podID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerID="80d80b26b6fa32832ca8f39975f78aeed394a370b1c2fdd0aa7cf72a244a01c6" exitCode=2
Nov 24 11:47:43 crc kubenswrapper[4789]: I1124 11:47:43.387332 4789 generic.go:334] "Generic (PLEG): container finished" podID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerID="accfd5d710fec79aeeaf67c9f1d81aa8aaa6cb97c42e2f0ca08d11869b430790" exitCode=0
Nov 24 11:47:43 crc kubenswrapper[4789]: I1124 11:47:43.387099 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerDied","Data":"1b80598968511056049557fe826e7ccc22096cd5dbf273cef5e6de1c68c2c46d"}
Nov 24 11:47:43 crc kubenswrapper[4789]: I1124 11:47:43.387372 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerDied","Data":"80d80b26b6fa32832ca8f39975f78aeed394a370b1c2fdd0aa7cf72a244a01c6"}
Nov 24 11:47:43 crc kubenswrapper[4789]: I1124 11:47:43.387387 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerDied","Data":"accfd5d710fec79aeeaf67c9f1d81aa8aaa6cb97c42e2f0ca08d11869b430790"}
Nov 24 11:47:43 crc kubenswrapper[4789]: I1124 11:47:43.850142 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 24 11:47:44 crc kubenswrapper[4789]: I1124 11:47:44.401533 4789 generic.go:334] "Generic (PLEG): container finished" podID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerID="7c1df969ee8d865b91d4b105b09acaf1554e3f93c744d47d4dd01d461f842b5c" exitCode=0
Nov 24 11:47:44 crc kubenswrapper[4789]: I1124 11:47:44.401750 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerDied","Data":"7c1df969ee8d865b91d4b105b09acaf1554e3f93c744d47d4dd01d461f842b5c"}
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.433962 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dj46k"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.435285 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dj46k"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.442904 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dj46k"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.529282 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dvvnv"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.530254 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dvvnv"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.554757 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dvvnv"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.576330 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e702aaf2-e5aa-43ca-a668-c743d706ab47-operator-scripts\") pod \"nova-api-db-create-dj46k\" (UID: \"e702aaf2-e5aa-43ca-a668-c743d706ab47\") " pod="openstack/nova-api-db-create-dj46k"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.576407 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sprzj\" (UniqueName: \"kubernetes.io/projected/e702aaf2-e5aa-43ca-a668-c743d706ab47-kube-api-access-sprzj\") pod \"nova-api-db-create-dj46k\" (UID: \"e702aaf2-e5aa-43ca-a668-c743d706ab47\") " pod="openstack/nova-api-db-create-dj46k"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.628136 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8dmfq"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.629267 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8dmfq"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.646710 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8356-account-create-q8xxw"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.647924 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8356-account-create-q8xxw"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.666788 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.680433 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-operator-scripts\") pod \"nova-cell0-db-create-dvvnv\" (UID: \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\") " pod="openstack/nova-cell0-db-create-dvvnv"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.680499 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxp9k\" (UniqueName: \"kubernetes.io/projected/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-kube-api-access-vxp9k\") pod \"nova-cell0-db-create-dvvnv\" (UID: \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\") " pod="openstack/nova-cell0-db-create-dvvnv"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.680551 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sprzj\" (UniqueName: \"kubernetes.io/projected/e702aaf2-e5aa-43ca-a668-c743d706ab47-kube-api-access-sprzj\") pod \"nova-api-db-create-dj46k\" (UID: \"e702aaf2-e5aa-43ca-a668-c743d706ab47\") " pod="openstack/nova-api-db-create-dj46k"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.680656 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e702aaf2-e5aa-43ca-a668-c743d706ab47-operator-scripts\") pod \"nova-api-db-create-dj46k\" (UID: \"e702aaf2-e5aa-43ca-a668-c743d706ab47\") " pod="openstack/nova-api-db-create-dj46k"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.681341 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e702aaf2-e5aa-43ca-a668-c743d706ab47-operator-scripts\") pod \"nova-api-db-create-dj46k\" (UID: \"e702aaf2-e5aa-43ca-a668-c743d706ab47\") " pod="openstack/nova-api-db-create-dj46k"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.695293 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8dmfq"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.712498 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sprzj\" (UniqueName: \"kubernetes.io/projected/e702aaf2-e5aa-43ca-a668-c743d706ab47-kube-api-access-sprzj\") pod \"nova-api-db-create-dj46k\" (UID: \"e702aaf2-e5aa-43ca-a668-c743d706ab47\") " pod="openstack/nova-api-db-create-dj46k"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.712559 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8356-account-create-q8xxw"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.774123 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dj46k"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.781790 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twkbs\" (UniqueName: \"kubernetes.io/projected/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-kube-api-access-twkbs\") pod \"nova-api-8356-account-create-q8xxw\" (UID: \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\") " pod="openstack/nova-api-8356-account-create-q8xxw"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.781871 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d7d899-a03a-4029-8316-b8388df47987-operator-scripts\") pod \"nova-cell1-db-create-8dmfq\" (UID: \"e6d7d899-a03a-4029-8316-b8388df47987\") " pod="openstack/nova-cell1-db-create-8dmfq"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.781976 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-operator-scripts\") pod \"nova-api-8356-account-create-q8xxw\" (UID: \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\") " pod="openstack/nova-api-8356-account-create-q8xxw"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.782035 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-operator-scripts\") pod \"nova-cell0-db-create-dvvnv\" (UID: \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\") " pod="openstack/nova-cell0-db-create-dvvnv"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.782081 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kj8x\" (UniqueName: \"kubernetes.io/projected/e6d7d899-a03a-4029-8316-b8388df47987-kube-api-access-9kj8x\") pod \"nova-cell1-db-create-8dmfq\" (UID: \"e6d7d899-a03a-4029-8316-b8388df47987\") " pod="openstack/nova-cell1-db-create-8dmfq"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.782101 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxp9k\" (UniqueName: \"kubernetes.io/projected/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-kube-api-access-vxp9k\") pod \"nova-cell0-db-create-dvvnv\" (UID: \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\") " pod="openstack/nova-cell0-db-create-dvvnv"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.782970 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-operator-scripts\") pod \"nova-cell0-db-create-dvvnv\" (UID: \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\") " pod="openstack/nova-cell0-db-create-dvvnv"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.814672 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxp9k\" (UniqueName: \"kubernetes.io/projected/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-kube-api-access-vxp9k\") pod \"nova-cell0-db-create-dvvnv\" (UID: \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\") " pod="openstack/nova-cell0-db-create-dvvnv"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.847436 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6ae9-account-create-nzsmf"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.848641 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ae9-account-create-nzsmf"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.853346 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.855296 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dvvnv"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.883319 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d7d899-a03a-4029-8316-b8388df47987-operator-scripts\") pod \"nova-cell1-db-create-8dmfq\" (UID: \"e6d7d899-a03a-4029-8316-b8388df47987\") " pod="openstack/nova-cell1-db-create-8dmfq"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.883586 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-operator-scripts\") pod \"nova-api-8356-account-create-q8xxw\" (UID: \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\") " pod="openstack/nova-api-8356-account-create-q8xxw"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.884414 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kj8x\" (UniqueName: \"kubernetes.io/projected/e6d7d899-a03a-4029-8316-b8388df47987-kube-api-access-9kj8x\") pod \"nova-cell1-db-create-8dmfq\" (UID: \"e6d7d899-a03a-4029-8316-b8388df47987\") " pod="openstack/nova-cell1-db-create-8dmfq"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.884305 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-operator-scripts\") pod \"nova-api-8356-account-create-q8xxw\" (UID: \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\") " pod="openstack/nova-api-8356-account-create-q8xxw"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.884756 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twkbs\" (UniqueName: \"kubernetes.io/projected/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-kube-api-access-twkbs\") pod \"nova-api-8356-account-create-q8xxw\" (UID: \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\") " pod="openstack/nova-api-8356-account-create-q8xxw"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.885578 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d7d899-a03a-4029-8316-b8388df47987-operator-scripts\") pod \"nova-cell1-db-create-8dmfq\" (UID: \"e6d7d899-a03a-4029-8316-b8388df47987\") " pod="openstack/nova-cell1-db-create-8dmfq"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.926508 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6ae9-account-create-nzsmf"]
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.944660 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twkbs\" (UniqueName: \"kubernetes.io/projected/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-kube-api-access-twkbs\") pod \"nova-api-8356-account-create-q8xxw\" (UID: \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\") " pod="openstack/nova-api-8356-account-create-q8xxw"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.950017 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kj8x\" (UniqueName: \"kubernetes.io/projected/e6d7d899-a03a-4029-8316-b8388df47987-kube-api-access-9kj8x\") pod \"nova-cell1-db-create-8dmfq\" (UID: \"e6d7d899-a03a-4029-8316-b8388df47987\") " pod="openstack/nova-cell1-db-create-8dmfq"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.983138 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8356-account-create-q8xxw"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.985976 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwqf\" (UniqueName: \"kubernetes.io/projected/ad86d851-a1c9-47b6-9f94-28176e2c1e85-kube-api-access-6rwqf\") pod \"nova-cell0-6ae9-account-create-nzsmf\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " pod="openstack/nova-cell0-6ae9-account-create-nzsmf"
Nov 24 11:47:46 crc kubenswrapper[4789]: I1124 11:47:46.986045 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad86d851-a1c9-47b6-9f94-28176e2c1e85-operator-scripts\") pod \"nova-cell0-6ae9-account-create-nzsmf\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " pod="openstack/nova-cell0-6ae9-account-create-nzsmf"
Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.037819 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c151-account-create-2lvng"]
Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.039004 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c151-account-create-2lvng"
Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.046360 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.094907 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad86d851-a1c9-47b6-9f94-28176e2c1e85-operator-scripts\") pod \"nova-cell0-6ae9-account-create-nzsmf\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " pod="openstack/nova-cell0-6ae9-account-create-nzsmf"
Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.095101 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rwqf\" (UniqueName: \"kubernetes.io/projected/ad86d851-a1c9-47b6-9f94-28176e2c1e85-kube-api-access-6rwqf\") pod \"nova-cell0-6ae9-account-create-nzsmf\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " pod="openstack/nova-cell0-6ae9-account-create-nzsmf"
Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.096471 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad86d851-a1c9-47b6-9f94-28176e2c1e85-operator-scripts\") pod \"nova-cell0-6ae9-account-create-nzsmf\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " pod="openstack/nova-cell0-6ae9-account-create-nzsmf"
Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.109237 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c151-account-create-2lvng"]
Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.153703 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rwqf\" (UniqueName: \"kubernetes.io/projected/ad86d851-a1c9-47b6-9f94-28176e2c1e85-kube-api-access-6rwqf\") pod \"nova-cell0-6ae9-account-create-nzsmf\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " pod="openstack/nova-cell0-6ae9-account-create-nzsmf"
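[Editor's note] Every kubenswrapper entry in this stream carries a klog header (severity letter, MMDD date, wall-clock time, PID, source file:line) ahead of the message, which makes bulk triage of a dump like this scriptable. A small parser sketch for the header format seen here; the regex is fitted to these lines, not guaranteed for every klog variant:

```go
// Sketch: parse the klog header on entries like
//   I1124 11:47:46.950017 4789 operation_generator.go:637] "MountVolume.SetUp ..."
// Fields: severity I/W/E/F, date MMDD, time, PID, source file:line, message.
package main

import (
	"fmt"
	"regexp"
)

var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

func main() {
	line := `I1124 11:47:46.950017 4789 operation_generator.go:637] "MountVolume.SetUp succeeded ..."`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("sev=%s date=%s time=%s pid=%s src=%s msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```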
\"nova-cell0-6ae9-account-create-nzsmf\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " pod="openstack/nova-cell0-6ae9-account-create-nzsmf" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.187446 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ae9-account-create-nzsmf" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.196587 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klvvq\" (UniqueName: \"kubernetes.io/projected/ec55afb7-d18a-449e-b32b-859da8cb7d47-kube-api-access-klvvq\") pod \"nova-cell1-c151-account-create-2lvng\" (UID: \"ec55afb7-d18a-449e-b32b-859da8cb7d47\") " pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.196650 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec55afb7-d18a-449e-b32b-859da8cb7d47-operator-scripts\") pod \"nova-cell1-c151-account-create-2lvng\" (UID: \"ec55afb7-d18a-449e-b32b-859da8cb7d47\") " pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.250232 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8dmfq" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.299543 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klvvq\" (UniqueName: \"kubernetes.io/projected/ec55afb7-d18a-449e-b32b-859da8cb7d47-kube-api-access-klvvq\") pod \"nova-cell1-c151-account-create-2lvng\" (UID: \"ec55afb7-d18a-449e-b32b-859da8cb7d47\") " pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.299608 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec55afb7-d18a-449e-b32b-859da8cb7d47-operator-scripts\") pod \"nova-cell1-c151-account-create-2lvng\" (UID: \"ec55afb7-d18a-449e-b32b-859da8cb7d47\") " pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.303178 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec55afb7-d18a-449e-b32b-859da8cb7d47-operator-scripts\") pod \"nova-cell1-c151-account-create-2lvng\" (UID: \"ec55afb7-d18a-449e-b32b-859da8cb7d47\") " pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.317093 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klvvq\" (UniqueName: \"kubernetes.io/projected/ec55afb7-d18a-449e-b32b-859da8cb7d47-kube-api-access-klvvq\") pod \"nova-cell1-c151-account-create-2lvng\" (UID: \"ec55afb7-d18a-449e-b32b-859da8cb7d47\") " pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:47 crc kubenswrapper[4789]: I1124 11:47:47.394844 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.163032 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.163420 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.594153 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.765711 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-log-httpd\") pod \"d7b56404-36d7-44f3-92c3-5835ea030fb1\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.765767 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-sg-core-conf-yaml\") pod \"d7b56404-36d7-44f3-92c3-5835ea030fb1\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.765793 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-combined-ca-bundle\") pod \"d7b56404-36d7-44f3-92c3-5835ea030fb1\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.765879 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-run-httpd\") pod \"d7b56404-36d7-44f3-92c3-5835ea030fb1\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.765896 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-scripts\") pod \"d7b56404-36d7-44f3-92c3-5835ea030fb1\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.765965 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc8rs\" (UniqueName: \"kubernetes.io/projected/d7b56404-36d7-44f3-92c3-5835ea030fb1-kube-api-access-cc8rs\") pod \"d7b56404-36d7-44f3-92c3-5835ea030fb1\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.766041 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-config-data\") pod \"d7b56404-36d7-44f3-92c3-5835ea030fb1\" (UID: \"d7b56404-36d7-44f3-92c3-5835ea030fb1\") " Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.766361 4789 operation_generator.go:803] 
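[Editor's note] The machine-config-daemon liveness failure above is a transport-level failure: the GET to http://127.0.0.1:8798/health never connected (connection refused), as opposed to the daemon answering with an unhealthy status. A sketch of the equivalent check; the 5-second timeout is an assumption, not kubelet's configured probe timeout:

```go
// Sketch of the check behind the "Liveness probe ... connection refused"
// entry: an HTTP GET against the pod's health endpoint, where transport
// errors (like connection refused) and non-2xx/3xx statuses count as
// probe failures.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```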
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.766706 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7b56404-36d7-44f3-92c3-5835ea030fb1" (UID: "d7b56404-36d7-44f3-92c3-5835ea030fb1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.770973 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-scripts" (OuterVolumeSpecName: "scripts") pod "d7b56404-36d7-44f3-92c3-5835ea030fb1" (UID: "d7b56404-36d7-44f3-92c3-5835ea030fb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.772579 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b56404-36d7-44f3-92c3-5835ea030fb1-kube-api-access-cc8rs" (OuterVolumeSpecName: "kube-api-access-cc8rs") pod "d7b56404-36d7-44f3-92c3-5835ea030fb1" (UID: "d7b56404-36d7-44f3-92c3-5835ea030fb1"). InnerVolumeSpecName "kube-api-access-cc8rs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.794712 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7b56404-36d7-44f3-92c3-5835ea030fb1" (UID: "d7b56404-36d7-44f3-92c3-5835ea030fb1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.853351 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7b56404-36d7-44f3-92c3-5835ea030fb1" (UID: "d7b56404-36d7-44f3-92c3-5835ea030fb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.868066 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.868096 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.868108 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.868119 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7b56404-36d7-44f3-92c3-5835ea030fb1-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.868133 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.868141 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc8rs\" (UniqueName: \"kubernetes.io/projected/d7b56404-36d7-44f3-92c3-5835ea030fb1-kube-api-access-cc8rs\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.894930 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-config-data" (OuterVolumeSpecName: "config-data") pod "d7b56404-36d7-44f3-92c3-5835ea030fb1" (UID: "d7b56404-36d7-44f3-92c3-5835ea030fb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:47:50 crc kubenswrapper[4789]: W1124 11:47:50.945608 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6d7d899_a03a_4029_8316_b8388df47987.slice/crio-0b29e4862cacdbac21ad3b033f1d4c4c346b8378014d0e173e8ed829ccf92098 WatchSource:0}: Error finding container 0b29e4862cacdbac21ad3b033f1d4c4c346b8378014d0e173e8ed829ccf92098: Status 404 returned error can't find the container with id 0b29e4862cacdbac21ad3b033f1d4c4c346b8378014d0e173e8ed829ccf92098
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.946692 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8dmfq"]
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.970224 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b56404-36d7-44f3-92c3-5835ea030fb1-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:47:50 crc kubenswrapper[4789]: I1124 11:47:50.974180 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dvvnv"]
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.144297 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c151-account-create-2lvng"]
Nov 24 11:47:51 crc kubenswrapper[4789]: W1124 11:47:51.151895 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec55afb7_d18a_449e_b32b_859da8cb7d47.slice/crio-533c09584146ee1885bf2f3fc9fa5442c580f55beb8a29f1a98576d107f22349 WatchSource:0}: Error finding container 533c09584146ee1885bf2f3fc9fa5442c580f55beb8a29f1a98576d107f22349: Status 404 returned error can't find the container with id 533c09584146ee1885bf2f3fc9fa5442c580f55beb8a29f1a98576d107f22349
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.166664 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6ae9-account-create-nzsmf"]
Nov 24 11:47:51 crc kubenswrapper[4789]: W1124 11:47:51.175305 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad86d851_a1c9_47b6_9f94_28176e2c1e85.slice/crio-da5f15b1b5ae99b0d7a5cb24be69bad148f0e6c845ae7a11c62d8e7f98eb7856 WatchSource:0}: Error finding container da5f15b1b5ae99b0d7a5cb24be69bad148f0e6c845ae7a11c62d8e7f98eb7856: Status 404 returned error can't find the container with id da5f15b1b5ae99b0d7a5cb24be69bad148f0e6c845ae7a11c62d8e7f98eb7856
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.183666 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dj46k"]
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.211976 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8356-account-create-q8xxw"]
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.488302 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8356-account-create-q8xxw" event={"ID":"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333","Type":"ContainerStarted","Data":"b683ed8320ad32d8bffbd6176d2130a09074339a92a1114b8ab6c8c3e060d41e"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.490130 4789 generic.go:334] "Generic (PLEG): container finished" podID="3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0" containerID="430999e109cfa99f065b4964caeaca483ae34c75c305b6d26a5ddb940a8b005a" exitCode=0
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.490174 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dvvnv" event={"ID":"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0","Type":"ContainerDied","Data":"430999e109cfa99f065b4964caeaca483ae34c75c305b6d26a5ddb940a8b005a"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.490230 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dvvnv" event={"ID":"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0","Type":"ContainerStarted","Data":"ce889dc6a5dea02b7c84e58fa8761fa22b148f5446f7c21c535c950483ad4004"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.492542 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dj46k" event={"ID":"e702aaf2-e5aa-43ca-a668-c743d706ab47","Type":"ContainerStarted","Data":"c976953a52a0937effd219c0b8bae0843a03039027a2f61e58678af40a4558a0"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.493787 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c151-account-create-2lvng" event={"ID":"ec55afb7-d18a-449e-b32b-859da8cb7d47","Type":"ContainerStarted","Data":"533c09584146ee1885bf2f3fc9fa5442c580f55beb8a29f1a98576d107f22349"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.495147 4789 generic.go:334] "Generic (PLEG): container finished" podID="e6d7d899-a03a-4029-8316-b8388df47987" containerID="5b8a9d6a9cb38c1d833a4e5b7a464144c861ff338e168661c05ac27df1331b7f" exitCode=0
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.495183 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8dmfq" event={"ID":"e6d7d899-a03a-4029-8316-b8388df47987","Type":"ContainerDied","Data":"5b8a9d6a9cb38c1d833a4e5b7a464144c861ff338e168661c05ac27df1331b7f"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.495197 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8dmfq" event={"ID":"e6d7d899-a03a-4029-8316-b8388df47987","Type":"ContainerStarted","Data":"0b29e4862cacdbac21ad3b033f1d4c4c346b8378014d0e173e8ed829ccf92098"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.496400 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"06801047-ac5f-4da6-a843-3c064e628c38","Type":"ContainerStarted","Data":"80d893e18b47fe1d9ead0af8a5bab188a8677098b573241660d6ff19350fa6fa"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.498072 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ae9-account-create-nzsmf" event={"ID":"ad86d851-a1c9-47b6-9f94-28176e2c1e85","Type":"ContainerStarted","Data":"da5f15b1b5ae99b0d7a5cb24be69bad148f0e6c845ae7a11c62d8e7f98eb7856"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.500022 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7b56404-36d7-44f3-92c3-5835ea030fb1","Type":"ContainerDied","Data":"8f585c955d978cdc54140bd5f88a77a83f6555b0646ca71f861c2b5e17fdc4bb"}
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.500051 4789 scope.go:117] "RemoveContainer" containerID="1b80598968511056049557fe826e7ccc22096cd5dbf273cef5e6de1c68c2c46d"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.500152 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.538588 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.986891629 podStartE2EDuration="13.538563754s" podCreationTimestamp="2025-11-24 11:47:38 +0000 UTC" firstStartedPulling="2025-11-24 11:47:38.92950229 +0000 UTC m=+1041.511973669" lastFinishedPulling="2025-11-24 11:47:50.481174415 +0000 UTC m=+1053.063645794" observedRunningTime="2025-11-24 11:47:51.533639495 +0000 UTC m=+1054.116110874" watchObservedRunningTime="2025-11-24 11:47:51.538563754 +0000 UTC m=+1054.121035133"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.540636 4789 scope.go:117] "RemoveContainer" containerID="80d80b26b6fa32832ca8f39975f78aeed394a370b1c2fdd0aa7cf72a244a01c6"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.586438 4789 scope.go:117] "RemoveContainer" containerID="7c1df969ee8d865b91d4b105b09acaf1554e3f93c744d47d4dd01d461f842b5c"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.611752 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.629896 4789 scope.go:117] "RemoveContainer" containerID="accfd5d710fec79aeeaf67c9f1d81aa8aaa6cb97c42e2f0ca08d11869b430790"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.634241 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.656781 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:47:51 crc kubenswrapper[4789]: E1124 11:47:51.657298 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="sg-core"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.657376 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="sg-core"
Nov 24 11:47:51 crc kubenswrapper[4789]: E1124 11:47:51.657479 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-notification-agent"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.657532 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-notification-agent"
Nov 24 11:47:51 crc kubenswrapper[4789]: E1124 11:47:51.657588 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-central-agent"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.658336 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-central-agent"
Nov 24 11:47:51 crc kubenswrapper[4789]: E1124 11:47:51.658414 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="proxy-httpd"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.658475 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="proxy-httpd"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.658699 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="proxy-httpd"
Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.658763 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-notification-agent"
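[Editor's note] The pod_startup_latency_tracker entry above decodes as follows: podStartE2EDuration (13.538563754s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.986891629s) is that same interval minus the image-pull window (lastFinishedPulling - firstStartedPulling, about 11.55s). The m= monotonic offsets in the entry let you verify the arithmetic directly:

```go
// Recompute the startup-latency figures from the m= monotonic offsets in
// the tracker entry: the SLO duration excludes time spent pulling the image.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 1041.511973669 // m= offsets from the entry, in seconds
		lastFinishedPulling = 1053.063645794
		e2e                 = 13.538563754 // watchObservedRunningTime - podCreationTimestamp
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9fs\n", pull)     // ~11.551672125
	fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // ~1.986891629, matching the log
}
```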
"RemoveStaleState removing state" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-notification-agent" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.658817 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="ceilometer-central-agent" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.658873 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" containerName="sg-core" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.663090 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.664766 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.665195 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.669115 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.787057 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-run-httpd\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.787127 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-config-data\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.787146 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.787180 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-scripts\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.787220 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.787237 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjjlf\" (UniqueName: \"kubernetes.io/projected/79e21ee2-c69a-4744-a817-50101e626dac-kube-api-access-pjjlf\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.787263 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-log-httpd\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.889246 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-config-data\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.889289 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.889335 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-scripts\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.889381 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.889397 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjjlf\" (UniqueName: \"kubernetes.io/projected/79e21ee2-c69a-4744-a817-50101e626dac-kube-api-access-pjjlf\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.889426 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-log-httpd\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.889502 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-run-httpd\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.889979 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-run-httpd\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.890538 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-log-httpd\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.894724 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-config-data\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.895788 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-scripts\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.908870 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.910158 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.913078 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjjlf\" (UniqueName: \"kubernetes.io/projected/79e21ee2-c69a-4744-a817-50101e626dac-kube-api-access-pjjlf\") pod \"ceilometer-0\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " pod="openstack/ceilometer-0" Nov 24 11:47:51 crc kubenswrapper[4789]: I1124 11:47:51.988025 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.209951 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b56404-36d7-44f3-92c3-5835ea030fb1" path="/var/lib/kubelet/pods/d7b56404-36d7-44f3-92c3-5835ea030fb1/volumes" Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.511952 4789 generic.go:334] "Generic (PLEG): container finished" podID="ec55afb7-d18a-449e-b32b-859da8cb7d47" containerID="1fd4a3a5294bdb4788f22fa1547442f21a7fa5abd54ae995670fbbabbbd44473" exitCode=0 Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.512034 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c151-account-create-2lvng" event={"ID":"ec55afb7-d18a-449e-b32b-859da8cb7d47","Type":"ContainerDied","Data":"1fd4a3a5294bdb4788f22fa1547442f21a7fa5abd54ae995670fbbabbbd44473"} Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.515221 4789 generic.go:334] "Generic (PLEG): container finished" podID="ad86d851-a1c9-47b6-9f94-28176e2c1e85" containerID="3eaa98b25524096365625b6eba64b0bbd0efbcded1e676ef6926dfa22f0d7bab" exitCode=0 Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.515290 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ae9-account-create-nzsmf" event={"ID":"ad86d851-a1c9-47b6-9f94-28176e2c1e85","Type":"ContainerDied","Data":"3eaa98b25524096365625b6eba64b0bbd0efbcded1e676ef6926dfa22f0d7bab"} Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.518394 4789 generic.go:334] "Generic (PLEG): container finished" podID="bac1f2fc-bf4e-4b73-b0b4-433b3b38e333" containerID="5aa34a057cb5265feaeaacc2a45c1a5d548c12692ed9489542c07983a1e42832" exitCode=0 Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.518529 4789 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-8356-account-create-q8xxw" event={"ID":"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333","Type":"ContainerDied","Data":"5aa34a057cb5265feaeaacc2a45c1a5d548c12692ed9489542c07983a1e42832"} Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.521766 4789 generic.go:334] "Generic (PLEG): container finished" podID="e702aaf2-e5aa-43ca-a668-c743d706ab47" containerID="4186b4087a8527398165c4386aa97342d1f4a32d3343dbf59ce2c3dd2b5e5b95" exitCode=0 Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.522150 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dj46k" event={"ID":"e702aaf2-e5aa-43ca-a668-c743d706ab47","Type":"ContainerDied","Data":"4186b4087a8527398165c4386aa97342d1f4a32d3343dbf59ce2c3dd2b5e5b95"} Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.640229 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.974822 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dvvnv" Nov 24 11:47:52 crc kubenswrapper[4789]: I1124 11:47:52.981852 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8dmfq" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.083775 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.120843 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxp9k\" (UniqueName: \"kubernetes.io/projected/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-kube-api-access-vxp9k\") pod \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\" (UID: \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\") " Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.121246 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-operator-scripts\") pod \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\" (UID: \"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0\") " Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.121438 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kj8x\" (UniqueName: \"kubernetes.io/projected/e6d7d899-a03a-4029-8316-b8388df47987-kube-api-access-9kj8x\") pod \"e6d7d899-a03a-4029-8316-b8388df47987\" (UID: \"e6d7d899-a03a-4029-8316-b8388df47987\") " Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.121498 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d7d899-a03a-4029-8316-b8388df47987-operator-scripts\") pod \"e6d7d899-a03a-4029-8316-b8388df47987\" (UID: \"e6d7d899-a03a-4029-8316-b8388df47987\") " Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.121793 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0" (UID: "3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.121994 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6d7d899-a03a-4029-8316-b8388df47987-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6d7d899-a03a-4029-8316-b8388df47987" (UID: "e6d7d899-a03a-4029-8316-b8388df47987"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.124895 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-kube-api-access-vxp9k" (OuterVolumeSpecName: "kube-api-access-vxp9k") pod "3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0" (UID: "3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0"). InnerVolumeSpecName "kube-api-access-vxp9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.125536 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d7d899-a03a-4029-8316-b8388df47987-kube-api-access-9kj8x" (OuterVolumeSpecName: "kube-api-access-9kj8x") pod "e6d7d899-a03a-4029-8316-b8388df47987" (UID: "e6d7d899-a03a-4029-8316-b8388df47987"). InnerVolumeSpecName "kube-api-access-9kj8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.223009 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kj8x\" (UniqueName: \"kubernetes.io/projected/e6d7d899-a03a-4029-8316-b8388df47987-kube-api-access-9kj8x\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.223048 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d7d899-a03a-4029-8316-b8388df47987-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.223058 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxp9k\" (UniqueName: \"kubernetes.io/projected/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-kube-api-access-vxp9k\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.223066 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.536934 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8dmfq" event={"ID":"e6d7d899-a03a-4029-8316-b8388df47987","Type":"ContainerDied","Data":"0b29e4862cacdbac21ad3b033f1d4c4c346b8378014d0e173e8ed829ccf92098"} Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.536973 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b29e4862cacdbac21ad3b033f1d4c4c346b8378014d0e173e8ed829ccf92098" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.536982 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8dmfq" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.538148 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerStarted","Data":"fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd"} Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.538169 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerStarted","Data":"eec11d348b1955c8a75c52efc64a213492f7303132a8d6d6530293ce4b7eadc5"} Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.539374 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dvvnv" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.539413 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dvvnv" event={"ID":"3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0","Type":"ContainerDied","Data":"ce889dc6a5dea02b7c84e58fa8761fa22b148f5446f7c21c535c950483ad4004"} Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.539427 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce889dc6a5dea02b7c84e58fa8761fa22b148f5446f7c21c535c950483ad4004" Nov 24 11:47:53 crc kubenswrapper[4789]: I1124 11:47:53.982621 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dj46k" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.073335 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ae9-account-create-nzsmf" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.092952 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.118493 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8356-account-create-q8xxw" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.156963 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad86d851-a1c9-47b6-9f94-28176e2c1e85-operator-scripts\") pod \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.157699 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rwqf\" (UniqueName: \"kubernetes.io/projected/ad86d851-a1c9-47b6-9f94-28176e2c1e85-kube-api-access-6rwqf\") pod \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\" (UID: \"ad86d851-a1c9-47b6-9f94-28176e2c1e85\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.157818 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e702aaf2-e5aa-43ca-a668-c743d706ab47-operator-scripts\") pod \"e702aaf2-e5aa-43ca-a668-c743d706ab47\" (UID: \"e702aaf2-e5aa-43ca-a668-c743d706ab47\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.157840 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sprzj\" (UniqueName: \"kubernetes.io/projected/e702aaf2-e5aa-43ca-a668-c743d706ab47-kube-api-access-sprzj\") pod \"e702aaf2-e5aa-43ca-a668-c743d706ab47\" (UID: \"e702aaf2-e5aa-43ca-a668-c743d706ab47\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.157400 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad86d851-a1c9-47b6-9f94-28176e2c1e85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad86d851-a1c9-47b6-9f94-28176e2c1e85" (UID: "ad86d851-a1c9-47b6-9f94-28176e2c1e85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.164105 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e702aaf2-e5aa-43ca-a668-c743d706ab47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e702aaf2-e5aa-43ca-a668-c743d706ab47" (UID: "e702aaf2-e5aa-43ca-a668-c743d706ab47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.176058 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad86d851-a1c9-47b6-9f94-28176e2c1e85-kube-api-access-6rwqf" (OuterVolumeSpecName: "kube-api-access-6rwqf") pod "ad86d851-a1c9-47b6-9f94-28176e2c1e85" (UID: "ad86d851-a1c9-47b6-9f94-28176e2c1e85"). InnerVolumeSpecName "kube-api-access-6rwqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.181925 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e702aaf2-e5aa-43ca-a668-c743d706ab47-kube-api-access-sprzj" (OuterVolumeSpecName: "kube-api-access-sprzj") pod "e702aaf2-e5aa-43ca-a668-c743d706ab47" (UID: "e702aaf2-e5aa-43ca-a668-c743d706ab47"). InnerVolumeSpecName "kube-api-access-sprzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.259990 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klvvq\" (UniqueName: \"kubernetes.io/projected/ec55afb7-d18a-449e-b32b-859da8cb7d47-kube-api-access-klvvq\") pod \"ec55afb7-d18a-449e-b32b-859da8cb7d47\" (UID: \"ec55afb7-d18a-449e-b32b-859da8cb7d47\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.260932 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twkbs\" (UniqueName: \"kubernetes.io/projected/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-kube-api-access-twkbs\") pod \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\" (UID: \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.261024 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec55afb7-d18a-449e-b32b-859da8cb7d47-operator-scripts\") pod \"ec55afb7-d18a-449e-b32b-859da8cb7d47\" (UID: \"ec55afb7-d18a-449e-b32b-859da8cb7d47\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.261090 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-operator-scripts\") pod \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\" (UID: \"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.261593 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad86d851-a1c9-47b6-9f94-28176e2c1e85-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.261663 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rwqf\" (UniqueName: \"kubernetes.io/projected/ad86d851-a1c9-47b6-9f94-28176e2c1e85-kube-api-access-6rwqf\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.261726 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e702aaf2-e5aa-43ca-a668-c743d706ab47-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.261863 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sprzj\" (UniqueName: \"kubernetes.io/projected/e702aaf2-e5aa-43ca-a668-c743d706ab47-kube-api-access-sprzj\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.262567 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bac1f2fc-bf4e-4b73-b0b4-433b3b38e333" (UID: "bac1f2fc-bf4e-4b73-b0b4-433b3b38e333"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.262633 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec55afb7-d18a-449e-b32b-859da8cb7d47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec55afb7-d18a-449e-b32b-859da8cb7d47" (UID: "ec55afb7-d18a-449e-b32b-859da8cb7d47"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.266285 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec55afb7-d18a-449e-b32b-859da8cb7d47-kube-api-access-klvvq" (OuterVolumeSpecName: "kube-api-access-klvvq") pod "ec55afb7-d18a-449e-b32b-859da8cb7d47" (UID: "ec55afb7-d18a-449e-b32b-859da8cb7d47"). InnerVolumeSpecName "kube-api-access-klvvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.268597 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-kube-api-access-twkbs" (OuterVolumeSpecName: "kube-api-access-twkbs") pod "bac1f2fc-bf4e-4b73-b0b4-433b3b38e333" (UID: "bac1f2fc-bf4e-4b73-b0b4-433b3b38e333"). InnerVolumeSpecName "kube-api-access-twkbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.363323 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klvvq\" (UniqueName: \"kubernetes.io/projected/ec55afb7-d18a-449e-b32b-859da8cb7d47-kube-api-access-klvvq\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.363365 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twkbs\" (UniqueName: \"kubernetes.io/projected/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-kube-api-access-twkbs\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.363381 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec55afb7-d18a-449e-b32b-859da8cb7d47-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.363394 4789 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.553428 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ae9-account-create-nzsmf" event={"ID":"ad86d851-a1c9-47b6-9f94-28176e2c1e85","Type":"ContainerDied","Data":"da5f15b1b5ae99b0d7a5cb24be69bad148f0e6c845ae7a11c62d8e7f98eb7856"} Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.553769 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5f15b1b5ae99b0d7a5cb24be69bad148f0e6c845ae7a11c62d8e7f98eb7856" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.553878 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ae9-account-create-nzsmf" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.560513 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8356-account-create-q8xxw" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.561185 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8356-account-create-q8xxw" event={"ID":"bac1f2fc-bf4e-4b73-b0b4-433b3b38e333","Type":"ContainerDied","Data":"b683ed8320ad32d8bffbd6176d2130a09074339a92a1114b8ab6c8c3e060d41e"} Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.561225 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b683ed8320ad32d8bffbd6176d2130a09074339a92a1114b8ab6c8c3e060d41e" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.574967 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerStarted","Data":"5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60"} Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.577006 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dj46k" event={"ID":"e702aaf2-e5aa-43ca-a668-c743d706ab47","Type":"ContainerDied","Data":"c976953a52a0937effd219c0b8bae0843a03039027a2f61e58678af40a4558a0"} Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.577093 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c976953a52a0937effd219c0b8bae0843a03039027a2f61e58678af40a4558a0" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.577200 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dj46k" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.581401 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c151-account-create-2lvng" event={"ID":"ec55afb7-d18a-449e-b32b-859da8cb7d47","Type":"ContainerDied","Data":"533c09584146ee1885bf2f3fc9fa5442c580f55beb8a29f1a98576d107f22349"} Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.581530 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="533c09584146ee1885bf2f3fc9fa5442c580f55beb8a29f1a98576d107f22349" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.582369 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c151-account-create-2lvng" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.585011 4789 generic.go:334] "Generic (PLEG): container finished" podID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerID="dfb9ceac80af2c7120075fa098eca7f07ce155210cccf3d12c4a88c193a92986" exitCode=137 Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.585054 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0896441a-c9db-4517-ae60-e0afa4cee74e","Type":"ContainerDied","Data":"dfb9ceac80af2c7120075fa098eca7f07ce155210cccf3d12c4a88c193a92986"} Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.703845 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:47:54 crc kubenswrapper[4789]: E1124 11:47:54.816106 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac1f2fc_bf4e_4b73_b0b4_433b3b38e333.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac1f2fc_bf4e_4b73_b0b4_433b3b38e333.slice/crio-b683ed8320ad32d8bffbd6176d2130a09074339a92a1114b8ab6c8c3e060d41e\": RecentStats: unable to find data in memory cache]" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.876024 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d55kq\" (UniqueName: \"kubernetes.io/projected/0896441a-c9db-4517-ae60-e0afa4cee74e-kube-api-access-d55kq\") pod \"0896441a-c9db-4517-ae60-e0afa4cee74e\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.876404 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data-custom\") pod \"0896441a-c9db-4517-ae60-e0afa4cee74e\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.876595 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0896441a-c9db-4517-ae60-e0afa4cee74e-logs\") pod \"0896441a-c9db-4517-ae60-e0afa4cee74e\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.876697 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0896441a-c9db-4517-ae60-e0afa4cee74e-etc-machine-id\") pod \"0896441a-c9db-4517-ae60-e0afa4cee74e\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.876794 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-combined-ca-bundle\") pod \"0896441a-c9db-4517-ae60-e0afa4cee74e\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.876911 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-scripts\") pod \"0896441a-c9db-4517-ae60-e0afa4cee74e\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.877017 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data\") pod \"0896441a-c9db-4517-ae60-e0afa4cee74e\" (UID: \"0896441a-c9db-4517-ae60-e0afa4cee74e\") " Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.877399 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0896441a-c9db-4517-ae60-e0afa4cee74e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0896441a-c9db-4517-ae60-e0afa4cee74e" (UID: "0896441a-c9db-4517-ae60-e0afa4cee74e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.877741 4789 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0896441a-c9db-4517-ae60-e0afa4cee74e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.879773 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0896441a-c9db-4517-ae60-e0afa4cee74e-logs" (OuterVolumeSpecName: "logs") pod "0896441a-c9db-4517-ae60-e0afa4cee74e" (UID: "0896441a-c9db-4517-ae60-e0afa4cee74e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.879902 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0896441a-c9db-4517-ae60-e0afa4cee74e-kube-api-access-d55kq" (OuterVolumeSpecName: "kube-api-access-d55kq") pod "0896441a-c9db-4517-ae60-e0afa4cee74e" (UID: "0896441a-c9db-4517-ae60-e0afa4cee74e"). InnerVolumeSpecName "kube-api-access-d55kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.882883 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0896441a-c9db-4517-ae60-e0afa4cee74e" (UID: "0896441a-c9db-4517-ae60-e0afa4cee74e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.892607 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-scripts" (OuterVolumeSpecName: "scripts") pod "0896441a-c9db-4517-ae60-e0afa4cee74e" (UID: "0896441a-c9db-4517-ae60-e0afa4cee74e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.921154 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0896441a-c9db-4517-ae60-e0afa4cee74e" (UID: "0896441a-c9db-4517-ae60-e0afa4cee74e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.938568 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data" (OuterVolumeSpecName: "config-data") pod "0896441a-c9db-4517-ae60-e0afa4cee74e" (UID: "0896441a-c9db-4517-ae60-e0afa4cee74e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.980001 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0896441a-c9db-4517-ae60-e0afa4cee74e-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.980149 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.980225 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.980309 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.980385 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d55kq\" (UniqueName: \"kubernetes.io/projected/0896441a-c9db-4517-ae60-e0afa4cee74e-kube-api-access-d55kq\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:54 crc kubenswrapper[4789]: I1124 11:47:54.980470 4789 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0896441a-c9db-4517-ae60-e0afa4cee74e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.599555 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0896441a-c9db-4517-ae60-e0afa4cee74e","Type":"ContainerDied","Data":"64283b1edbaba74d9344d2d371168f1278799683341492ec6eef87bb1601cc7d"} Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.599570 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.599875 4789 scope.go:117] "RemoveContainer" containerID="dfb9ceac80af2c7120075fa098eca7f07ce155210cccf3d12c4a88c193a92986" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.607390 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerStarted","Data":"6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1"} Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.635512 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.645985 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.672734 4789 scope.go:117] "RemoveContainer" containerID="5952190f1db4df3f399bb853cfb5c572f8f671f1fb96ed9693babbe863d1e21c" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.746260 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:55 crc kubenswrapper[4789]: E1124 11:47:55.747658 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.747678 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: E1124 11:47:55.747700 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerName="cinder-api" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.747707 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerName="cinder-api" Nov 24 11:47:55 crc kubenswrapper[4789]: E1124 11:47:55.747725 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac1f2fc-bf4e-4b73-b0b4-433b3b38e333" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.747731 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac1f2fc-bf4e-4b73-b0b4-433b3b38e333" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: E1124 11:47:55.747753 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d7d899-a03a-4029-8316-b8388df47987" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.747759 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d7d899-a03a-4029-8316-b8388df47987" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: E1124 11:47:55.747773 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec55afb7-d18a-449e-b32b-859da8cb7d47" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.747780 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec55afb7-d18a-449e-b32b-859da8cb7d47" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: E1124 11:47:55.747792 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e702aaf2-e5aa-43ca-a668-c743d706ab47" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.747974 4789 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e702aaf2-e5aa-43ca-a668-c743d706ab47" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: E1124 11:47:55.748006 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerName="cinder-api-log" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748013 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerName="cinder-api-log" Nov 24 11:47:55 crc kubenswrapper[4789]: E1124 11:47:55.748038 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad86d851-a1c9-47b6-9f94-28176e2c1e85" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748044 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad86d851-a1c9-47b6-9f94-28176e2c1e85" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748674 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748698 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="e702aaf2-e5aa-43ca-a668-c743d706ab47" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748709 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec55afb7-d18a-449e-b32b-859da8cb7d47" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748724 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac1f2fc-bf4e-4b73-b0b4-433b3b38e333" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748743 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d7d899-a03a-4029-8316-b8388df47987" containerName="mariadb-database-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748760 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerName="cinder-api" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748959 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" containerName="cinder-api-log" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.748981 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad86d851-a1c9-47b6-9f94-28176e2c1e85" containerName="mariadb-account-create" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.754377 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.758110 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.758314 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.758632 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.767923 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.817856 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4a54b4-60e2-46ee-a063-e70757b214d2-logs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.818003 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.818061 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb4a54b4-60e2-46ee-a063-e70757b214d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.818084 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.818105 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fl9j\" (UniqueName: \"kubernetes.io/projected/bb4a54b4-60e2-46ee-a063-e70757b214d2-kube-api-access-9fl9j\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.818162 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.818210 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.818293 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-config-data\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.818313 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-scripts\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919181 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-config-data\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919225 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-scripts\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919263 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4a54b4-60e2-46ee-a063-e70757b214d2-logs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919306 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919331 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb4a54b4-60e2-46ee-a063-e70757b214d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919347 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919362 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fl9j\" (UniqueName: \"kubernetes.io/projected/bb4a54b4-60e2-46ee-a063-e70757b214d2-kube-api-access-9fl9j\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919394 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.919425 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.920390 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4a54b4-60e2-46ee-a063-e70757b214d2-logs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.920410 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb4a54b4-60e2-46ee-a063-e70757b214d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.925215 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.925539 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.927027 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.927368 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-config-data\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.927692 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-scripts\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.927823 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4a54b4-60e2-46ee-a063-e70757b214d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:55 crc kubenswrapper[4789]: I1124 11:47:55.942927 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fl9j\" (UniqueName: \"kubernetes.io/projected/bb4a54b4-60e2-46ee-a063-e70757b214d2-kube-api-access-9fl9j\") pod \"cinder-api-0\" (UID: \"bb4a54b4-60e2-46ee-a063-e70757b214d2\") " pod="openstack/cinder-api-0" Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.086848 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.191087 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0896441a-c9db-4517-ae60-e0afa4cee74e" path="/var/lib/kubelet/pods/0896441a-c9db-4517-ae60-e0afa4cee74e/volumes" Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.578018 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.618718 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerStarted","Data":"ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4"} Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.618875 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="ceilometer-central-agent" containerID="cri-o://fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd" gracePeriod=30 Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.618940 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.619202 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="proxy-httpd" containerID="cri-o://ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4" gracePeriod=30 Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.619244 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="sg-core" containerID="cri-o://6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1" gracePeriod=30 Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.619277 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="ceilometer-notification-agent" containerID="cri-o://5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60" gracePeriod=30 Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.621186 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb4a54b4-60e2-46ee-a063-e70757b214d2","Type":"ContainerStarted","Data":"af855d329bf455ed6129f9793d2241d0a96cbdd962898cafaca58217d6c0848b"} Nov 24 11:47:56 crc kubenswrapper[4789]: I1124 11:47:56.642648 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.322991366 podStartE2EDuration="5.642619088s" podCreationTimestamp="2025-11-24 11:47:51 +0000 UTC" firstStartedPulling="2025-11-24 11:47:52.676983942 +0000 UTC m=+1055.259455321" lastFinishedPulling="2025-11-24 11:47:55.996611664 +0000 UTC m=+1058.579083043" observedRunningTime="2025-11-24 11:47:56.641566884 +0000 UTC m=+1059.224038263" watchObservedRunningTime="2025-11-24 11:47:56.642619088 +0000 UTC m=+1059.225090467" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.210098 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7v4g"] Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.211195 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.214746 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zk8mz" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.214784 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.214976 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.234780 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7v4g"] Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.302327 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-config-data\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.303236 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slvjx\" (UniqueName: \"kubernetes.io/projected/ee3fcc6a-5c80-4e48-819d-defc5053b969-kube-api-access-slvjx\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.303292 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-scripts\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.303312 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.409255 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-config-data\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.409323 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slvjx\" (UniqueName: \"kubernetes.io/projected/ee3fcc6a-5c80-4e48-819d-defc5053b969-kube-api-access-slvjx\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.409389 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-scripts\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: 
\"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.409415 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.418328 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-scripts\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.421017 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.432032 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-config-data\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.435361 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slvjx\" (UniqueName: \"kubernetes.io/projected/ee3fcc6a-5c80-4e48-819d-defc5053b969-kube-api-access-slvjx\") pod \"nova-cell0-conductor-db-sync-r7v4g\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.531756 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.694043 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb4a54b4-60e2-46ee-a063-e70757b214d2","Type":"ContainerStarted","Data":"391ef8c43a3a0b519fe6e04de0d3f2b0744921e8960c74d0edbbd5534493b323"} Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.719586 4789 generic.go:334] "Generic (PLEG): container finished" podID="79e21ee2-c69a-4744-a817-50101e626dac" containerID="6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1" exitCode=2 Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.719612 4789 generic.go:334] "Generic (PLEG): container finished" podID="79e21ee2-c69a-4744-a817-50101e626dac" containerID="5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60" exitCode=0 Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.719630 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerDied","Data":"6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1"} Nov 24 11:47:57 crc kubenswrapper[4789]: I1124 11:47:57.719661 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerDied","Data":"5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60"} Nov 24 11:47:58 crc kubenswrapper[4789]: I1124 11:47:58.004493 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7v4g"] Nov 24 11:47:58 crc kubenswrapper[4789]: I1124 11:47:58.769958 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" event={"ID":"ee3fcc6a-5c80-4e48-819d-defc5053b969","Type":"ContainerStarted","Data":"7a73a303102e34b7165a27480ff5a24b411cabe70461de624439dbfbe75b63d8"} Nov 24 11:47:58 crc kubenswrapper[4789]: I1124 11:47:58.774360 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb4a54b4-60e2-46ee-a063-e70757b214d2","Type":"ContainerStarted","Data":"432d02c82b964cf65c2c7b7dc9544e572bed64ca32dafb8f0e0fbbabff11f72d"} Nov 24 11:47:58 crc kubenswrapper[4789]: I1124 11:47:58.775826 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 11:47:58 crc kubenswrapper[4789]: I1124 11:47:58.801730 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.801702837 podStartE2EDuration="3.801702837s" podCreationTimestamp="2025-11-24 11:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:47:58.791192074 +0000 UTC m=+1061.373663453" watchObservedRunningTime="2025-11-24 11:47:58.801702837 +0000 UTC m=+1061.384174216" Nov 24 11:48:01 crc kubenswrapper[4789]: I1124 11:48:01.811355 4789 generic.go:334] "Generic (PLEG): container finished" podID="79e21ee2-c69a-4744-a817-50101e626dac" containerID="fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd" exitCode=0 Nov 24 11:48:01 crc kubenswrapper[4789]: I1124 11:48:01.811445 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerDied","Data":"fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd"} Nov 24 11:48:05 
crc kubenswrapper[4789]: I1124 11:48:05.864286 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" event={"ID":"ee3fcc6a-5c80-4e48-819d-defc5053b969","Type":"ContainerStarted","Data":"f15f6a3409a556aabb2720c270e24a3a6184887bbe0ffdb8a499fc3d96887905"} Nov 24 11:48:05 crc kubenswrapper[4789]: I1124 11:48:05.880958 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" podStartSLOduration=1.603901077 podStartE2EDuration="8.880944346s" podCreationTimestamp="2025-11-24 11:47:57 +0000 UTC" firstStartedPulling="2025-11-24 11:47:58.028363185 +0000 UTC m=+1060.610834564" lastFinishedPulling="2025-11-24 11:48:05.305406454 +0000 UTC m=+1067.887877833" observedRunningTime="2025-11-24 11:48:05.879820229 +0000 UTC m=+1068.462291648" watchObservedRunningTime="2025-11-24 11:48:05.880944346 +0000 UTC m=+1068.463415725" Nov 24 11:48:08 crc kubenswrapper[4789]: I1124 11:48:08.096522 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 11:48:17 crc kubenswrapper[4789]: I1124 11:48:17.988381 4789 generic.go:334] "Generic (PLEG): container finished" podID="ee3fcc6a-5c80-4e48-819d-defc5053b969" containerID="f15f6a3409a556aabb2720c270e24a3a6184887bbe0ffdb8a499fc3d96887905" exitCode=0 Nov 24 11:48:17 crc kubenswrapper[4789]: I1124 11:48:17.988560 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" event={"ID":"ee3fcc6a-5c80-4e48-819d-defc5053b969","Type":"ContainerDied","Data":"f15f6a3409a556aabb2720c270e24a3a6184887bbe0ffdb8a499fc3d96887905"} Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.299945 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.434753 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-config-data\") pod \"ee3fcc6a-5c80-4e48-819d-defc5053b969\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.434845 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-combined-ca-bundle\") pod \"ee3fcc6a-5c80-4e48-819d-defc5053b969\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.434879 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-scripts\") pod \"ee3fcc6a-5c80-4e48-819d-defc5053b969\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.434902 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slvjx\" (UniqueName: \"kubernetes.io/projected/ee3fcc6a-5c80-4e48-819d-defc5053b969-kube-api-access-slvjx\") pod \"ee3fcc6a-5c80-4e48-819d-defc5053b969\" (UID: \"ee3fcc6a-5c80-4e48-819d-defc5053b969\") " Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.441201 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3fcc6a-5c80-4e48-819d-defc5053b969-kube-api-access-slvjx" (OuterVolumeSpecName: "kube-api-access-slvjx") pod "ee3fcc6a-5c80-4e48-819d-defc5053b969" (UID: "ee3fcc6a-5c80-4e48-819d-defc5053b969"). InnerVolumeSpecName "kube-api-access-slvjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.442825 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-scripts" (OuterVolumeSpecName: "scripts") pod "ee3fcc6a-5c80-4e48-819d-defc5053b969" (UID: "ee3fcc6a-5c80-4e48-819d-defc5053b969"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.463418 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-config-data" (OuterVolumeSpecName: "config-data") pod "ee3fcc6a-5c80-4e48-819d-defc5053b969" (UID: "ee3fcc6a-5c80-4e48-819d-defc5053b969"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.480294 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee3fcc6a-5c80-4e48-819d-defc5053b969" (UID: "ee3fcc6a-5c80-4e48-819d-defc5053b969"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.536783 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.536828 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.536844 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slvjx\" (UniqueName: \"kubernetes.io/projected/ee3fcc6a-5c80-4e48-819d-defc5053b969-kube-api-access-slvjx\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:19 crc kubenswrapper[4789]: I1124 11:48:19.536858 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3fcc6a-5c80-4e48-819d-defc5053b969-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.008915 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" event={"ID":"ee3fcc6a-5c80-4e48-819d-defc5053b969","Type":"ContainerDied","Data":"7a73a303102e34b7165a27480ff5a24b411cabe70461de624439dbfbe75b63d8"} Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.008969 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a73a303102e34b7165a27480ff5a24b411cabe70461de624439dbfbe75b63d8" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.009014 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r7v4g" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.160417 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.162959 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.163021 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:48:20 crc kubenswrapper[4789]: E1124 11:48:20.163085 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3fcc6a-5c80-4e48-819d-defc5053b969" containerName="nova-cell0-conductor-db-sync" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.163116 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3fcc6a-5c80-4e48-819d-defc5053b969" containerName="nova-cell0-conductor-db-sync" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.164112 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3fcc6a-5c80-4e48-819d-defc5053b969" containerName="nova-cell0-conductor-db-sync" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.165853 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.170091 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.170234 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zk8mz" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.205408 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.248367 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cf5f04-f863-4ee8-89e2-fe21038afe96-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.248680 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blnkw\" (UniqueName: \"kubernetes.io/projected/68cf5f04-f863-4ee8-89e2-fe21038afe96-kube-api-access-blnkw\") pod \"nova-cell0-conductor-0\" (UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.248805 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cf5f04-f863-4ee8-89e2-fe21038afe96-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.350708 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cf5f04-f863-4ee8-89e2-fe21038afe96-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.350847 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blnkw\" (UniqueName: \"kubernetes.io/projected/68cf5f04-f863-4ee8-89e2-fe21038afe96-kube-api-access-blnkw\") pod \"nova-cell0-conductor-0\" (UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.350903 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cf5f04-f863-4ee8-89e2-fe21038afe96-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.355558 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cf5f04-f863-4ee8-89e2-fe21038afe96-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.371042 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cf5f04-f863-4ee8-89e2-fe21038afe96-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.376064 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blnkw\" (UniqueName: \"kubernetes.io/projected/68cf5f04-f863-4ee8-89e2-fe21038afe96-kube-api-access-blnkw\") pod \"nova-cell0-conductor-0\" (UID: \"68cf5f04-f863-4ee8-89e2-fe21038afe96\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.501566 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:20 crc kubenswrapper[4789]: I1124 11:48:20.973412 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:48:21 crc kubenswrapper[4789]: I1124 11:48:21.022564 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"68cf5f04-f863-4ee8-89e2-fe21038afe96","Type":"ContainerStarted","Data":"da00a4ca19bef11ffc546914fa324e8fa109f863dbc733411609e706e0dd410b"} Nov 24 11:48:21 crc kubenswrapper[4789]: I1124 11:48:21.995949 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 11:48:22 crc kubenswrapper[4789]: I1124 11:48:22.042638 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"68cf5f04-f863-4ee8-89e2-fe21038afe96","Type":"ContainerStarted","Data":"c1cd76bcb3e096b604011a156e2b3a6e8d9002bead0720fde66d35f1c4547ef2"} Nov 24 11:48:22 crc kubenswrapper[4789]: I1124 11:48:22.043997 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:22 crc kubenswrapper[4789]: I1124 11:48:22.081593 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.081574645 podStartE2EDuration="2.081574645s" podCreationTimestamp="2025-11-24 11:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:48:22.075289284 +0000 UTC m=+1084.657760673" watchObservedRunningTime="2025-11-24 11:48:22.081574645 +0000 UTC m=+1084.664046034" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.081574 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.104092 4789 generic.go:334] "Generic (PLEG): container finished" podID="79e21ee2-c69a-4744-a817-50101e626dac" containerID="ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4" exitCode=137 Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.104144 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerDied","Data":"ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4"} Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.104174 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79e21ee2-c69a-4744-a817-50101e626dac","Type":"ContainerDied","Data":"eec11d348b1955c8a75c52efc64a213492f7303132a8d6d6530293ce4b7eadc5"} Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.104198 4789 scope.go:117] "RemoveContainer" containerID="ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.104199 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.142665 4789 scope.go:117] "RemoveContainer" containerID="6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.160551 4789 scope.go:117] "RemoveContainer" containerID="5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.178807 4789 scope.go:117] "RemoveContainer" containerID="fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.182056 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-sg-core-conf-yaml\") pod \"79e21ee2-c69a-4744-a817-50101e626dac\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.182092 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-combined-ca-bundle\") pod \"79e21ee2-c69a-4744-a817-50101e626dac\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.182121 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-scripts\") pod \"79e21ee2-c69a-4744-a817-50101e626dac\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.182182 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-run-httpd\") pod \"79e21ee2-c69a-4744-a817-50101e626dac\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.182262 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-config-data\") pod \"79e21ee2-c69a-4744-a817-50101e626dac\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " Nov 24 11:48:27 
crc kubenswrapper[4789]: I1124 11:48:27.182467 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjjlf\" (UniqueName: \"kubernetes.io/projected/79e21ee2-c69a-4744-a817-50101e626dac-kube-api-access-pjjlf\") pod \"79e21ee2-c69a-4744-a817-50101e626dac\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.182528 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-log-httpd\") pod \"79e21ee2-c69a-4744-a817-50101e626dac\" (UID: \"79e21ee2-c69a-4744-a817-50101e626dac\") " Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.182955 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "79e21ee2-c69a-4744-a817-50101e626dac" (UID: "79e21ee2-c69a-4744-a817-50101e626dac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.183180 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "79e21ee2-c69a-4744-a817-50101e626dac" (UID: "79e21ee2-c69a-4744-a817-50101e626dac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.187362 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-scripts" (OuterVolumeSpecName: "scripts") pod "79e21ee2-c69a-4744-a817-50101e626dac" (UID: "79e21ee2-c69a-4744-a817-50101e626dac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.189361 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e21ee2-c69a-4744-a817-50101e626dac-kube-api-access-pjjlf" (OuterVolumeSpecName: "kube-api-access-pjjlf") pod "79e21ee2-c69a-4744-a817-50101e626dac" (UID: "79e21ee2-c69a-4744-a817-50101e626dac"). InnerVolumeSpecName "kube-api-access-pjjlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.209905 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "79e21ee2-c69a-4744-a817-50101e626dac" (UID: "79e21ee2-c69a-4744-a817-50101e626dac"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.210305 4789 scope.go:117] "RemoveContainer" containerID="ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4" Nov 24 11:48:27 crc kubenswrapper[4789]: E1124 11:48:27.210953 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4\": container with ID starting with ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4 not found: ID does not exist" containerID="ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.211020 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4"} err="failed to get container status \"ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4\": rpc error: code = NotFound desc = could not find container \"ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4\": container with ID starting with ac0588993b8f16e3332b7ae952e9d03cfb4a5f4d410016e8160e22ed51ca35f4 not found: ID does not exist" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.211058 4789 scope.go:117] "RemoveContainer" containerID="6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1" Nov 24 11:48:27 crc kubenswrapper[4789]: E1124 11:48:27.211472 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1\": container with ID starting with 6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1 not found: ID does not exist" containerID="6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.211508 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1"} err="failed to get container status \"6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1\": rpc error: code = NotFound desc = could not find container \"6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1\": container with ID starting with 6805ad9bc82bbbed1ffc6fd4bea04ee761cf45e349b5bdc565358c5c2b818cc1 not found: ID does not exist" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.211535 4789 scope.go:117] "RemoveContainer" containerID="5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60" Nov 24 11:48:27 crc kubenswrapper[4789]: E1124 11:48:27.212606 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60\": container with ID starting with 5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60 not found: ID does not exist" containerID="5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.212639 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60"} err="failed to get container status \"5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60\": rpc error: code = NotFound desc = could not 
find container \"5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60\": container with ID starting with 5b405c41e720d42db1a92ef71c2bdf5ee6a86620ae549c2adc9feb04167a8c60 not found: ID does not exist" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.212660 4789 scope.go:117] "RemoveContainer" containerID="fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd" Nov 24 11:48:27 crc kubenswrapper[4789]: E1124 11:48:27.218416 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd\": container with ID starting with fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd not found: ID does not exist" containerID="fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.218451 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd"} err="failed to get container status \"fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd\": rpc error: code = NotFound desc = could not find container \"fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd\": container with ID starting with fcc3711273bf11a0edf0679fa48c65c0309e7ad0997e7b2295803810ca491ecd not found: ID does not exist" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.260611 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79e21ee2-c69a-4744-a817-50101e626dac" (UID: "79e21ee2-c69a-4744-a817-50101e626dac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.285350 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjjlf\" (UniqueName: \"kubernetes.io/projected/79e21ee2-c69a-4744-a817-50101e626dac-kube-api-access-pjjlf\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.285381 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.285393 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.285405 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.285416 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.285427 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79e21ee2-c69a-4744-a817-50101e626dac-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.292223 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-config-data" (OuterVolumeSpecName: "config-data") pod "79e21ee2-c69a-4744-a817-50101e626dac" (UID: "79e21ee2-c69a-4744-a817-50101e626dac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.387275 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e21ee2-c69a-4744-a817-50101e626dac-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.440566 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.447641 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.474470 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:48:27 crc kubenswrapper[4789]: E1124 11:48:27.476783 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="ceilometer-notification-agent" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.476840 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="ceilometer-notification-agent" Nov 24 11:48:27 crc kubenswrapper[4789]: E1124 11:48:27.476862 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="sg-core" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.476871 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="sg-core" Nov 24 11:48:27 crc kubenswrapper[4789]: E1124 11:48:27.476902 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="proxy-httpd" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.476910 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="proxy-httpd" Nov 24 11:48:27 crc kubenswrapper[4789]: E1124 11:48:27.476927 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="ceilometer-central-agent" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.476935 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="ceilometer-central-agent" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.477202 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="sg-core" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.477219 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="ceilometer-central-agent" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.477252 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="proxy-httpd" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.477270 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e21ee2-c69a-4744-a817-50101e626dac" containerName="ceilometer-notification-agent" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.479610 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.481255 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.482337 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.486490 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.589911 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-config-data\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.589973 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wswf\" (UniqueName: \"kubernetes.io/projected/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-kube-api-access-5wswf\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.590014 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.590049 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-scripts\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.590081 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.590171 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-log-httpd\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.590230 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-run-httpd\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692138 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-log-httpd\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692523 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-run-httpd\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692616 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-config-data\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692658 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wswf\" (UniqueName: \"kubernetes.io/projected/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-kube-api-access-5wswf\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692685 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-log-httpd\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692702 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692766 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-scripts\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692802 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.692892 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-run-httpd\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.698536 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.698759 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.698913 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-config-data\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.700196 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-scripts\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.710698 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wswf\" (UniqueName: \"kubernetes.io/projected/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-kube-api-access-5wswf\") pod \"ceilometer-0\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " pod="openstack/ceilometer-0" Nov 24 11:48:27 crc kubenswrapper[4789]: I1124 11:48:27.795101 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:48:28 crc kubenswrapper[4789]: I1124 11:48:28.184640 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e21ee2-c69a-4744-a817-50101e626dac" path="/var/lib/kubelet/pods/79e21ee2-c69a-4744-a817-50101e626dac/volumes" Nov 24 11:48:28 crc kubenswrapper[4789]: I1124 11:48:28.305566 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:48:29 crc kubenswrapper[4789]: I1124 11:48:29.120977 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerStarted","Data":"f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c"} Nov 24 11:48:29 crc kubenswrapper[4789]: I1124 11:48:29.121299 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerStarted","Data":"8e85a9a2036fa02ccc1c0d13e022623421f09868474e487eaf60b8fc565fcb02"} Nov 24 11:48:30 crc kubenswrapper[4789]: I1124 11:48:30.130376 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerStarted","Data":"5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111"} Nov 24 11:48:30 crc kubenswrapper[4789]: I1124 11:48:30.540667 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.015502 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-b9k6d"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.019966 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.023138 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.024401 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.064631 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9k6d"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.158255 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.161390 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-config-data\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.169585 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2h8\" (UniqueName: \"kubernetes.io/projected/a66c1b99-9164-4ade-a853-5696e0f21764-kube-api-access-4x2h8\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.169983 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-scripts\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.173942 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.190288 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerStarted","Data":"4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f"} Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.190438 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.207768 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.288765 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txw4x\" (UniqueName: \"kubernetes.io/projected/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-kube-api-access-txw4x\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.289307 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-config-data\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.289373 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x2h8\" (UniqueName: \"kubernetes.io/projected/a66c1b99-9164-4ade-a853-5696e0f21764-kube-api-access-4x2h8\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.289403 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-scripts\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.289448 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.289511 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.289553 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-config-data\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.291589 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.338118 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.341232 
4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-scripts\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.359161 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.360808 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.366967 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.380134 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2h8\" (UniqueName: \"kubernetes.io/projected/a66c1b99-9164-4ade-a853-5696e0f21764-kube-api-access-4x2h8\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.386555 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-config-data\") pod \"nova-cell0-cell-mapping-b9k6d\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.392566 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj9hh\" (UniqueName: \"kubernetes.io/projected/6ef33760-b229-42f2-9197-57ff1a2d8d3b-kube-api-access-mj9hh\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.392663 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ef33760-b229-42f2-9197-57ff1a2d8d3b-logs\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.392687 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.392715 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.392777 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txw4x\" (UniqueName: \"kubernetes.io/projected/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-kube-api-access-txw4x\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.392795 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-config-data\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.392828 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-config-data\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.405447 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.409851 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-config-data\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.424658 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.432239 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txw4x\" (UniqueName: \"kubernetes.io/projected/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-kube-api-access-txw4x\") pod \"nova-scheduler-0\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") " pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.448781 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.476198 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.476348 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.487137 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.496421 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ef33760-b229-42f2-9197-57ff1a2d8d3b-logs\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.496492 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.496552 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kctm\" (UniqueName: \"kubernetes.io/projected/0d771d30-09b4-484e-8421-cc33d10bc26a-kube-api-access-4kctm\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.496600 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-config-data\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.496651 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj9hh\" (UniqueName: \"kubernetes.io/projected/6ef33760-b229-42f2-9197-57ff1a2d8d3b-kube-api-access-mj9hh\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.496679 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d771d30-09b4-484e-8421-cc33d10bc26a-logs\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.496709 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.496728 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-config-data\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.497611 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ef33760-b229-42f2-9197-57ff1a2d8d3b-logs\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.517175 4789 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-4dgpk"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.518619 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.524152 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.528793 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-config-data\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.539080 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj9hh\" (UniqueName: \"kubernetes.io/projected/6ef33760-b229-42f2-9197-57ff1a2d8d3b-kube-api-access-mj9hh\") pod \"nova-metadata-0\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.545565 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-4dgpk"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.574738 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.598962 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d771d30-09b4-484e-8421-cc33d10bc26a-logs\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.599022 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.599051 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.599072 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.599090 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-config-data\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc 
kubenswrapper[4789]: I1124 11:48:31.599143 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.599176 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kctm\" (UniqueName: \"kubernetes.io/projected/0d771d30-09b4-484e-8421-cc33d10bc26a-kube-api-access-4kctm\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.599216 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-config\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.599254 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtvj6\" (UniqueName: \"kubernetes.io/projected/234d181f-edd2-40e2-9c4f-683c28176a4a-kube-api-access-vtvj6\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.600024 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d771d30-09b4-484e-8421-cc33d10bc26a-logs\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.608824 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.609167 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.609960 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.618802 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.619507 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.628192 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-config-data\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.656763 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kctm\" (UniqueName: \"kubernetes.io/projected/0d771d30-09b4-484e-8421-cc33d10bc26a-kube-api-access-4kctm\") pod \"nova-api-0\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") " pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.661939 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.700384 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-config\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.700448 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtvj6\" (UniqueName: \"kubernetes.io/projected/234d181f-edd2-40e2-9c4f-683c28176a4a-kube-api-access-vtvj6\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.700522 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.700546 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.700571 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.700588 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.700603 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2f7\" (UniqueName: \"kubernetes.io/projected/f527a2d4-6a1e-4c79-9437-a216f724aa62-kube-api-access-zh2f7\") pod \"nova-cell1-novncproxy-0\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.700716 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.701636 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.701787 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-config\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.702159 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.702293 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.732179 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtvj6\" (UniqueName: \"kubernetes.io/projected/234d181f-edd2-40e2-9c4f-683c28176a4a-kube-api-access-vtvj6\") pod \"dnsmasq-dns-8b8cf6657-4dgpk\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.804744 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.805028 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 
11:48:31.805047 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh2f7\" (UniqueName: \"kubernetes.io/projected/f527a2d4-6a1e-4c79-9437-a216f724aa62-kube-api-access-zh2f7\") pod \"nova-cell1-novncproxy-0\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.812678 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.813140 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.819135 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.824839 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh2f7\" (UniqueName: \"kubernetes.io/projected/f527a2d4-6a1e-4c79-9437-a216f724aa62-kube-api-access-zh2f7\") pod \"nova-cell1-novncproxy-0\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.840717 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.857101 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:31 crc kubenswrapper[4789]: I1124 11:48:31.952824 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.156632 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.320239 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9k6d"] Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.467063 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.516912 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-4dgpk"] Nov 24 11:48:32 crc kubenswrapper[4789]: W1124 11:48:32.517586 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ef33760_b229_42f2_9197_57ff1a2d8d3b.slice/crio-38e325489592fe113355994c3ce78accdbe42f43fdc9bdf46bcc6bcc253ca229 WatchSource:0}: Error finding container 38e325489592fe113355994c3ce78accdbe42f43fdc9bdf46bcc6bcc253ca229: Status 404 returned error can't find the container with id 38e325489592fe113355994c3ce78accdbe42f43fdc9bdf46bcc6bcc253ca229 Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.606431 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.771052 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:48:32 crc kubenswrapper[4789]: W1124 11:48:32.782861 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf527a2d4_6a1e_4c79_9437_a216f724aa62.slice/crio-7d6bd70e7fadc5bd0c02508d6b80b7d6056b1d1da5934c853a490c37982143c8 WatchSource:0}: Error finding container 7d6bd70e7fadc5bd0c02508d6b80b7d6056b1d1da5934c853a490c37982143c8: Status 404 returned error can't find the container with id 7d6bd70e7fadc5bd0c02508d6b80b7d6056b1d1da5934c853a490c37982143c8 Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.907345 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvgpt"] Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.908361 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.912674 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.913283 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.926163 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvgpt"] Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.941160 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-config-data\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.941214 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvhwt\" (UniqueName: \"kubernetes.io/projected/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-kube-api-access-dvhwt\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.941270 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:32 crc kubenswrapper[4789]: I1124 11:48:32.941307 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-scripts\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.042248 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvhwt\" (UniqueName: \"kubernetes.io/projected/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-kube-api-access-dvhwt\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.042328 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.042362 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-scripts\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.042436 4789 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-config-data\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.046683 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-config-data\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.047980 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.049937 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-scripts\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.059511 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvhwt\" (UniqueName: \"kubernetes.io/projected/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-kube-api-access-dvhwt\") pod \"nova-cell1-conductor-db-sync-vvgpt\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") " pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.238577 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ef33760-b229-42f2-9197-57ff1a2d8d3b","Type":"ContainerStarted","Data":"38e325489592fe113355994c3ce78accdbe42f43fdc9bdf46bcc6bcc253ca229"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.244882 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerStarted","Data":"e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.244990 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.248054 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5","Type":"ContainerStarted","Data":"d3dd9e63fa4adafc168dd1035c77238c1245120fc4bed161d45b8fc395a4d547"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.248733 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vvgpt" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.251945 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9k6d" event={"ID":"a66c1b99-9164-4ade-a853-5696e0f21764","Type":"ContainerStarted","Data":"a4951fe682c84783cf01089d61af331b4d66eb9d9a32875c8c255605275094ef"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.252134 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9k6d" event={"ID":"a66c1b99-9164-4ade-a853-5696e0f21764","Type":"ContainerStarted","Data":"456d9bac681a080ea48b3b61c72ddb1bc592bc6273498c1697eebccd96ef154d"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.269770 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.298142074 podStartE2EDuration="6.269746744s" podCreationTimestamp="2025-11-24 11:48:27 +0000 UTC" firstStartedPulling="2025-11-24 11:48:28.308683206 +0000 UTC m=+1090.891154595" lastFinishedPulling="2025-11-24 11:48:32.280287886 +0000 UTC m=+1094.862759265" observedRunningTime="2025-11-24 11:48:33.263477782 +0000 UTC m=+1095.845949181" watchObservedRunningTime="2025-11-24 11:48:33.269746744 +0000 UTC m=+1095.852218123" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.292105 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f527a2d4-6a1e-4c79-9437-a216f724aa62","Type":"ContainerStarted","Data":"7d6bd70e7fadc5bd0c02508d6b80b7d6056b1d1da5934c853a490c37982143c8"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.294689 4789 generic.go:334] "Generic (PLEG): container finished" podID="234d181f-edd2-40e2-9c4f-683c28176a4a" containerID="c5679323096f9ad30087ec4c4bae3cc84ec652c8f3b91f8c606c91d2ee81e7dd" exitCode=0 Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.294806 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" event={"ID":"234d181f-edd2-40e2-9c4f-683c28176a4a","Type":"ContainerDied","Data":"c5679323096f9ad30087ec4c4bae3cc84ec652c8f3b91f8c606c91d2ee81e7dd"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.294846 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" event={"ID":"234d181f-edd2-40e2-9c4f-683c28176a4a","Type":"ContainerStarted","Data":"155a2fcad1bc50f9667e21441db8285fb31354a68b3f4d92bc0eeb55b179f010"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.321371 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d771d30-09b4-484e-8421-cc33d10bc26a","Type":"ContainerStarted","Data":"c25071ced57fd1c28ecb88dcca712a54c3b7e114f99351fa4b2944b7059c40fb"} Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.341133 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-b9k6d" podStartSLOduration=3.341109499 podStartE2EDuration="3.341109499s" podCreationTimestamp="2025-11-24 11:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:48:33.29192279 +0000 UTC m=+1095.874394169" watchObservedRunningTime="2025-11-24 11:48:33.341109499 +0000 UTC m=+1095.923580878" Nov 24 11:48:33 crc kubenswrapper[4789]: I1124 11:48:33.950806 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvgpt"] Nov 24 
11:48:33 crc kubenswrapper[4789]: W1124 11:48:33.967832 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8ed866d_2fd1_4ad5_8cf0_6d8655144679.slice/crio-da03e6d2376c78f13cc67adf76b913935b8dacd319fb45e45a1810780f50de8c WatchSource:0}: Error finding container da03e6d2376c78f13cc67adf76b913935b8dacd319fb45e45a1810780f50de8c: Status 404 returned error can't find the container with id da03e6d2376c78f13cc67adf76b913935b8dacd319fb45e45a1810780f50de8c Nov 24 11:48:34 crc kubenswrapper[4789]: I1124 11:48:34.350212 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vvgpt" event={"ID":"d8ed866d-2fd1-4ad5-8cf0-6d8655144679","Type":"ContainerStarted","Data":"da03e6d2376c78f13cc67adf76b913935b8dacd319fb45e45a1810780f50de8c"} Nov 24 11:48:34 crc kubenswrapper[4789]: I1124 11:48:34.365643 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" event={"ID":"234d181f-edd2-40e2-9c4f-683c28176a4a","Type":"ContainerStarted","Data":"007f3a8dce0bd7dfc3a683dfbc04b21b28fc2dade6a75ed6b12401eaa382ce0e"} Nov 24 11:48:34 crc kubenswrapper[4789]: I1124 11:48:34.366034 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:34 crc kubenswrapper[4789]: I1124 11:48:34.408219 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" podStartSLOduration=3.408201622 podStartE2EDuration="3.408201622s" podCreationTimestamp="2025-11-24 11:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:48:34.397780321 +0000 UTC m=+1096.980251700" watchObservedRunningTime="2025-11-24 11:48:34.408201622 +0000 UTC m=+1096.990672991" Nov 24 11:48:34 crc kubenswrapper[4789]: I1124 11:48:34.866026 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:48:34 crc kubenswrapper[4789]: I1124 11:48:34.874349 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.443831 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vvgpt" event={"ID":"d8ed866d-2fd1-4ad5-8cf0-6d8655144679","Type":"ContainerStarted","Data":"ccbcbb0c6e21d1e6f997643b6f091b6f63af003868bb5c44dd222b6a7543d6b5"} Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.455034 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d771d30-09b4-484e-8421-cc33d10bc26a","Type":"ContainerStarted","Data":"627e6bb55411e8ef976d0e3e1a93afb24b5ea845a93f488da90a7c217d3ae43c"} Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.455073 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d771d30-09b4-484e-8421-cc33d10bc26a","Type":"ContainerStarted","Data":"d92c6a933bc93c96e3db231af4b8ad55c621d619c486534e8417fa737936f1ba"} Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.467306 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ef33760-b229-42f2-9197-57ff1a2d8d3b","Type":"ContainerStarted","Data":"4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190"} Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.467351 4789 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"6ef33760-b229-42f2-9197-57ff1a2d8d3b","Type":"ContainerStarted","Data":"652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23"} Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.467538 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerName="nova-metadata-log" containerID="cri-o://652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23" gracePeriod=30 Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.467653 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerName="nova-metadata-metadata" containerID="cri-o://4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190" gracePeriod=30 Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.475629 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5","Type":"ContainerStarted","Data":"a9d5b134385432ce36c34376bcf40d9a193bb97c200f38d56d81ce58a44ddc4c"} Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.479666 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f527a2d4-6a1e-4c79-9437-a216f724aa62","Type":"ContainerStarted","Data":"0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4"} Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.479818 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f527a2d4-6a1e-4c79-9437-a216f724aa62" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4" gracePeriod=30 Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.487575 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-vvgpt" podStartSLOduration=5.487551395 podStartE2EDuration="5.487551395s" podCreationTimestamp="2025-11-24 11:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:48:37.481822207 +0000 UTC m=+1100.064293586" watchObservedRunningTime="2025-11-24 11:48:37.487551395 +0000 UTC m=+1100.070022774" Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.524932 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.519288115 podStartE2EDuration="6.524911819s" podCreationTimestamp="2025-11-24 11:48:31 +0000 UTC" firstStartedPulling="2025-11-24 11:48:32.596226023 +0000 UTC m=+1095.178697402" lastFinishedPulling="2025-11-24 11:48:36.601849727 +0000 UTC m=+1099.184321106" observedRunningTime="2025-11-24 11:48:37.515655045 +0000 UTC m=+1100.098126434" watchObservedRunningTime="2025-11-24 11:48:37.524911819 +0000 UTC m=+1100.107383198" Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.545019 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.495658824 podStartE2EDuration="6.544997804s" podCreationTimestamp="2025-11-24 11:48:31 +0000 UTC" firstStartedPulling="2025-11-24 11:48:32.522356728 +0000 UTC m=+1095.104828107" lastFinishedPulling="2025-11-24 11:48:36.571695708 +0000 UTC m=+1099.154167087" observedRunningTime="2025-11-24 
11:48:37.540737821 +0000 UTC m=+1100.123209200" watchObservedRunningTime="2025-11-24 11:48:37.544997804 +0000 UTC m=+1100.127469183" Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.556286 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.770590249 podStartE2EDuration="6.556266616s" podCreationTimestamp="2025-11-24 11:48:31 +0000 UTC" firstStartedPulling="2025-11-24 11:48:32.784677478 +0000 UTC m=+1095.367148857" lastFinishedPulling="2025-11-24 11:48:36.570353825 +0000 UTC m=+1099.152825224" observedRunningTime="2025-11-24 11:48:37.553816798 +0000 UTC m=+1100.136288177" watchObservedRunningTime="2025-11-24 11:48:37.556266616 +0000 UTC m=+1100.138737995" Nov 24 11:48:37 crc kubenswrapper[4789]: I1124 11:48:37.597699 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.245708313 podStartE2EDuration="6.597679728s" podCreationTimestamp="2025-11-24 11:48:31 +0000 UTC" firstStartedPulling="2025-11-24 11:48:32.220540303 +0000 UTC m=+1094.803011682" lastFinishedPulling="2025-11-24 11:48:36.572511718 +0000 UTC m=+1099.154983097" observedRunningTime="2025-11-24 11:48:37.571084955 +0000 UTC m=+1100.153556334" watchObservedRunningTime="2025-11-24 11:48:37.597679728 +0000 UTC m=+1100.180151107" Nov 24 11:48:38 crc kubenswrapper[4789]: I1124 11:48:38.495326 4789 generic.go:334] "Generic (PLEG): container finished" podID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerID="652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23" exitCode=143 Nov 24 11:48:38 crc kubenswrapper[4789]: I1124 11:48:38.495426 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ef33760-b229-42f2-9197-57ff1a2d8d3b","Type":"ContainerDied","Data":"652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23"} Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.537327 4789 generic.go:334] "Generic (PLEG): container finished" podID="a66c1b99-9164-4ade-a853-5696e0f21764" containerID="a4951fe682c84783cf01089d61af331b4d66eb9d9a32875c8c255605275094ef" exitCode=0 Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.537425 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9k6d" event={"ID":"a66c1b99-9164-4ade-a853-5696e0f21764","Type":"ContainerDied","Data":"a4951fe682c84783cf01089d61af331b4d66eb9d9a32875c8c255605275094ef"} Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.576131 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.576352 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.614859 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.820281 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.820339 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.841648 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 
11:48:41.841702 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.859706 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.921567 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-n5hqj"] Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.921900 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" podUID="e6978127-8354-4009-af79-a96fc2e47c9f" containerName="dnsmasq-dns" containerID="cri-o://731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199" gracePeriod=10 Nov 24 11:48:41 crc kubenswrapper[4789]: I1124 11:48:41.954070 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.532836 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.547156 4789 generic.go:334] "Generic (PLEG): container finished" podID="e6978127-8354-4009-af79-a96fc2e47c9f" containerID="731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199" exitCode=0 Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.547203 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.547226 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" event={"ID":"e6978127-8354-4009-af79-a96fc2e47c9f","Type":"ContainerDied","Data":"731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199"} Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.547323 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-n5hqj" event={"ID":"e6978127-8354-4009-af79-a96fc2e47c9f","Type":"ContainerDied","Data":"9327d548d70dc6667fc17207e61a5e2744425ac0d79a91c386f141dd3beadeb4"} Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.547371 4789 scope.go:117] "RemoveContainer" containerID="731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.611755 4789 scope.go:117] "RemoveContainer" containerID="a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.611906 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.648179 4789 scope.go:117] "RemoveContainer" containerID="731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199" Nov 24 11:48:42 crc kubenswrapper[4789]: E1124 11:48:42.648630 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199\": container with ID starting with 731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199 not found: ID does not exist" containerID="731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.648670 4789 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199"} err="failed to get container status \"731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199\": rpc error: code = NotFound desc = could not find container \"731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199\": container with ID starting with 731a08aa876ccc98278e8a05f4f029074189ef51eb1166cea20d8102a20bd199 not found: ID does not exist" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.648694 4789 scope.go:117] "RemoveContainer" containerID="a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b" Nov 24 11:48:42 crc kubenswrapper[4789]: E1124 11:48:42.649902 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b\": container with ID starting with a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b not found: ID does not exist" containerID="a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.649928 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b"} err="failed to get container status \"a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b\": rpc error: code = NotFound desc = could not find container \"a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b\": container with ID starting with a80d58db6eec7cd185fbbe5474cdc5b42663d7ab243387a1d81a8df8a784063b not found: ID does not exist" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.650342 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-nb\") pod \"e6978127-8354-4009-af79-a96fc2e47c9f\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.650659 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj5rs\" (UniqueName: \"kubernetes.io/projected/e6978127-8354-4009-af79-a96fc2e47c9f-kube-api-access-cj5rs\") pod \"e6978127-8354-4009-af79-a96fc2e47c9f\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.650694 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-sb\") pod \"e6978127-8354-4009-af79-a96fc2e47c9f\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.650848 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-config\") pod \"e6978127-8354-4009-af79-a96fc2e47c9f\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.650878 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-dns-svc\") pod \"e6978127-8354-4009-af79-a96fc2e47c9f\" (UID: \"e6978127-8354-4009-af79-a96fc2e47c9f\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.665696 4789 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6978127-8354-4009-af79-a96fc2e47c9f-kube-api-access-cj5rs" (OuterVolumeSpecName: "kube-api-access-cj5rs") pod "e6978127-8354-4009-af79-a96fc2e47c9f" (UID: "e6978127-8354-4009-af79-a96fc2e47c9f"). InnerVolumeSpecName "kube-api-access-cj5rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.711474 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6978127-8354-4009-af79-a96fc2e47c9f" (UID: "e6978127-8354-4009-af79-a96fc2e47c9f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.714183 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6978127-8354-4009-af79-a96fc2e47c9f" (UID: "e6978127-8354-4009-af79-a96fc2e47c9f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.718694 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6978127-8354-4009-af79-a96fc2e47c9f" (UID: "e6978127-8354-4009-af79-a96fc2e47c9f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.722356 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-config" (OuterVolumeSpecName: "config") pod "e6978127-8354-4009-af79-a96fc2e47c9f" (UID: "e6978127-8354-4009-af79-a96fc2e47c9f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.759748 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj5rs\" (UniqueName: \"kubernetes.io/projected/e6978127-8354-4009-af79-a96fc2e47c9f-kube-api-access-cj5rs\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.759790 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.759804 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.759818 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.759829 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6978127-8354-4009-af79-a96fc2e47c9f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.897211 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-n5hqj"] Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.903871 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-n5hqj"] Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.905352 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.923639 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.171:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.923675 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.171:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.964094 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-config-data\") pod \"a66c1b99-9164-4ade-a853-5696e0f21764\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.964165 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x2h8\" (UniqueName: \"kubernetes.io/projected/a66c1b99-9164-4ade-a853-5696e0f21764-kube-api-access-4x2h8\") pod \"a66c1b99-9164-4ade-a853-5696e0f21764\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.964265 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-combined-ca-bundle\") pod \"a66c1b99-9164-4ade-a853-5696e0f21764\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.964318 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-scripts\") pod \"a66c1b99-9164-4ade-a853-5696e0f21764\" (UID: \"a66c1b99-9164-4ade-a853-5696e0f21764\") " Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.968541 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66c1b99-9164-4ade-a853-5696e0f21764-kube-api-access-4x2h8" (OuterVolumeSpecName: "kube-api-access-4x2h8") pod "a66c1b99-9164-4ade-a853-5696e0f21764" (UID: "a66c1b99-9164-4ade-a853-5696e0f21764"). InnerVolumeSpecName "kube-api-access-4x2h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.968570 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-scripts" (OuterVolumeSpecName: "scripts") pod "a66c1b99-9164-4ade-a853-5696e0f21764" (UID: "a66c1b99-9164-4ade-a853-5696e0f21764"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:42 crc kubenswrapper[4789]: I1124 11:48:42.995804 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a66c1b99-9164-4ade-a853-5696e0f21764" (UID: "a66c1b99-9164-4ade-a853-5696e0f21764"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.002662 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-config-data" (OuterVolumeSpecName: "config-data") pod "a66c1b99-9164-4ade-a853-5696e0f21764" (UID: "a66c1b99-9164-4ade-a853-5696e0f21764"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.067605 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x2h8\" (UniqueName: \"kubernetes.io/projected/a66c1b99-9164-4ade-a853-5696e0f21764-kube-api-access-4x2h8\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.067649 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.067665 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.067683 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66c1b99-9164-4ade-a853-5696e0f21764-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.555538 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9k6d" event={"ID":"a66c1b99-9164-4ade-a853-5696e0f21764","Type":"ContainerDied","Data":"456d9bac681a080ea48b3b61c72ddb1bc592bc6273498c1697eebccd96ef154d"} Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.555575 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9k6d" Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.555590 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="456d9bac681a080ea48b3b61c72ddb1bc592bc6273498c1697eebccd96ef154d" Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.751038 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.751605 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-log" containerID="cri-o://d92c6a933bc93c96e3db231af4b8ad55c621d619c486534e8417fa737936f1ba" gracePeriod=30 Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.751670 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-api" containerID="cri-o://627e6bb55411e8ef976d0e3e1a93afb24b5ea845a93f488da90a7c217d3ae43c" gracePeriod=30 Nov 24 11:48:43 crc kubenswrapper[4789]: I1124 11:48:43.760419 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:48:44 crc kubenswrapper[4789]: I1124 11:48:44.180331 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6978127-8354-4009-af79-a96fc2e47c9f" path="/var/lib/kubelet/pods/e6978127-8354-4009-af79-a96fc2e47c9f/volumes" Nov 24 11:48:44 crc kubenswrapper[4789]: I1124 11:48:44.566093 4789 generic.go:334] "Generic (PLEG): container finished" podID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerID="d92c6a933bc93c96e3db231af4b8ad55c621d619c486534e8417fa737936f1ba" exitCode=143 Nov 24 11:48:44 crc kubenswrapper[4789]: I1124 11:48:44.566735 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d771d30-09b4-484e-8421-cc33d10bc26a","Type":"ContainerDied","Data":"d92c6a933bc93c96e3db231af4b8ad55c621d619c486534e8417fa737936f1ba"} Nov 24 11:48:45 crc kubenswrapper[4789]: I1124 11:48:45.576997 4789 generic.go:334] "Generic (PLEG): container finished" podID="d8ed866d-2fd1-4ad5-8cf0-6d8655144679" containerID="ccbcbb0c6e21d1e6f997643b6f091b6f63af003868bb5c44dd222b6a7543d6b5" exitCode=0 Nov 24 11:48:45 crc kubenswrapper[4789]: I1124 11:48:45.577146 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vvgpt" event={"ID":"d8ed866d-2fd1-4ad5-8cf0-6d8655144679","Type":"ContainerDied","Data":"ccbcbb0c6e21d1e6f997643b6f091b6f63af003868bb5c44dd222b6a7543d6b5"} Nov 24 11:48:45 crc kubenswrapper[4789]: I1124 11:48:45.577195 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" containerName="nova-scheduler-scheduler" containerID="cri-o://a9d5b134385432ce36c34376bcf40d9a193bb97c200f38d56d81ce58a44ddc4c" gracePeriod=30 Nov 24 11:48:46 crc kubenswrapper[4789]: E1124 11:48:46.578293 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9d5b134385432ce36c34376bcf40d9a193bb97c200f38d56d81ce58a44ddc4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:48:46 crc kubenswrapper[4789]: E1124 11:48:46.580230 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
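The "Killing container with a grace period" records followed by exitCode=143 show the usual stop sequence: the runtime delivers SIGTERM, waits up to gracePeriod seconds, then SIGKILLs; 143 is 128+15, i.e. the process exited on SIGTERM inside the grace window. A Go sketch of the same pattern against a local process; stopWithGrace and the sleep workload are stand-ins for illustration, not CRI-O or kubelet code:

// stop_sketch.go - illustrative SIGTERM-then-SIGKILL grace period.
package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace, then SIGKILLs.
// A process that exits on SIGTERM reports wait status 143 (128+15),
// matching the exitCode=143 seen for nova-api-log above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		// Exited within the grace period.
	case <-time.After(grace):
		cmd.Process.Kill() // grace period expired: SIGKILL
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300") // stand-in workload
	cmd.Start()
	stopWithGrace(cmd, 30*time.Second) // gracePeriod=30 as in the log
}
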
Nov 24 11:48:46 crc kubenswrapper[4789]: E1124 11:48:46.581721 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a9d5b134385432ce36c34376bcf40d9a193bb97c200f38d56d81ce58a44ddc4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 24 11:48:46 crc kubenswrapper[4789]: E1124 11:48:46.581804 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" containerName="nova-scheduler-scheduler"
Nov 24 11:48:46 crc kubenswrapper[4789]: I1124 11:48:46.891378 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vvgpt"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.059243 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-combined-ca-bundle\") pod \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") "
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.059436 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-config-data\") pod \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") "
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.059496 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-scripts\") pod \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") "
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.059536 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvhwt\" (UniqueName: \"kubernetes.io/projected/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-kube-api-access-dvhwt\") pod \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\" (UID: \"d8ed866d-2fd1-4ad5-8cf0-6d8655144679\") "
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.065504 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-kube-api-access-dvhwt" (OuterVolumeSpecName: "kube-api-access-dvhwt") pod "d8ed866d-2fd1-4ad5-8cf0-6d8655144679" (UID: "d8ed866d-2fd1-4ad5-8cf0-6d8655144679"). InnerVolumeSpecName "kube-api-access-dvhwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.068638 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-scripts" (OuterVolumeSpecName: "scripts") pod "d8ed866d-2fd1-4ad5-8cf0-6d8655144679" (UID: "d8ed866d-2fd1-4ad5-8cf0-6d8655144679"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.086268 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-config-data" (OuterVolumeSpecName: "config-data") pod "d8ed866d-2fd1-4ad5-8cf0-6d8655144679" (UID: "d8ed866d-2fd1-4ad5-8cf0-6d8655144679"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.088180 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8ed866d-2fd1-4ad5-8cf0-6d8655144679" (UID: "d8ed866d-2fd1-4ad5-8cf0-6d8655144679"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.162101 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.162136 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.162146 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.162157 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvhwt\" (UniqueName: \"kubernetes.io/projected/d8ed866d-2fd1-4ad5-8cf0-6d8655144679-kube-api-access-dvhwt\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.608816 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vvgpt" event={"ID":"d8ed866d-2fd1-4ad5-8cf0-6d8655144679","Type":"ContainerDied","Data":"da03e6d2376c78f13cc67adf76b913935b8dacd319fb45e45a1810780f50de8c"}
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.609151 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da03e6d2376c78f13cc67adf76b913935b8dacd319fb45e45a1810780f50de8c"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.608886 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vvgpt"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.664300 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 11:48:47 crc kubenswrapper[4789]: E1124 11:48:47.664905 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6978127-8354-4009-af79-a96fc2e47c9f" containerName="dnsmasq-dns"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.664980 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6978127-8354-4009-af79-a96fc2e47c9f" containerName="dnsmasq-dns"
Nov 24 11:48:47 crc kubenswrapper[4789]: E1124 11:48:47.665041 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6978127-8354-4009-af79-a96fc2e47c9f" containerName="init"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.665146 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6978127-8354-4009-af79-a96fc2e47c9f" containerName="init"
Nov 24 11:48:47 crc kubenswrapper[4789]: E1124 11:48:47.665230 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ed866d-2fd1-4ad5-8cf0-6d8655144679" containerName="nova-cell1-conductor-db-sync"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.665299 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ed866d-2fd1-4ad5-8cf0-6d8655144679" containerName="nova-cell1-conductor-db-sync"
Nov 24 11:48:47 crc kubenswrapper[4789]: E1124 11:48:47.665379 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66c1b99-9164-4ade-a853-5696e0f21764" containerName="nova-manage"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.665450 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66c1b99-9164-4ade-a853-5696e0f21764" containerName="nova-manage"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.665747 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ed866d-2fd1-4ad5-8cf0-6d8655144679" containerName="nova-cell1-conductor-db-sync"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.665837 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6978127-8354-4009-af79-a96fc2e47c9f" containerName="dnsmasq-dns"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.665917 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66c1b99-9164-4ade-a853-5696e0f21764" containerName="nova-manage"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.666721 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.669199 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51167964-7234-4713-aef7-4f75548e9040-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.669353 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdqx6\" (UniqueName: \"kubernetes.io/projected/51167964-7234-4713-aef7-4f75548e9040-kube-api-access-bdqx6\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.669480 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51167964-7234-4713-aef7-4f75548e9040-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.669779 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.674227 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.770981 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51167964-7234-4713-aef7-4f75548e9040-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.771021 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51167964-7234-4713-aef7-4f75548e9040-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.771223 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdqx6\" (UniqueName: \"kubernetes.io/projected/51167964-7234-4713-aef7-4f75548e9040-kube-api-access-bdqx6\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.775649 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51167964-7234-4713-aef7-4f75548e9040-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.779878 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51167964-7234-4713-aef7-4f75548e9040-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.801732 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdqx6\" (UniqueName: \"kubernetes.io/projected/51167964-7234-4713-aef7-4f75548e9040-kube-api-access-bdqx6\") pod \"nova-cell1-conductor-0\" (UID: \"51167964-7234-4713-aef7-4f75548e9040\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:47 crc kubenswrapper[4789]: I1124 11:48:47.983322 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.488004 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 11:48:48 crc kubenswrapper[4789]: W1124 11:48:48.517090 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51167964_7234_4713_aef7_4f75548e9040.slice/crio-0f65b7f6c26fe4d2484587b7941d03d60627c05d1be922d8eb1fdc1a10989f34 WatchSource:0}: Error finding container 0f65b7f6c26fe4d2484587b7941d03d60627c05d1be922d8eb1fdc1a10989f34: Status 404 returned error can't find the container with id 0f65b7f6c26fe4d2484587b7941d03d60627c05d1be922d8eb1fdc1a10989f34
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.623455 4789 generic.go:334] "Generic (PLEG): container finished" podID="bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" containerID="a9d5b134385432ce36c34376bcf40d9a193bb97c200f38d56d81ce58a44ddc4c" exitCode=0
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.623542 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5","Type":"ContainerDied","Data":"a9d5b134385432ce36c34376bcf40d9a193bb97c200f38d56d81ce58a44ddc4c"}
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.624645 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"51167964-7234-4713-aef7-4f75548e9040","Type":"ContainerStarted","Data":"0f65b7f6c26fe4d2484587b7941d03d60627c05d1be922d8eb1fdc1a10989f34"}
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.735105 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.892602 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-combined-ca-bundle\") pod \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") "
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.892708 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-config-data\") pod \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") "
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.892878 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txw4x\" (UniqueName: \"kubernetes.io/projected/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-kube-api-access-txw4x\") pod \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\" (UID: \"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5\") "
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.900688 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-kube-api-access-txw4x" (OuterVolumeSpecName: "kube-api-access-txw4x") pod "bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" (UID: "bcf1dca4-fb5d-47c3-a0be-3b0c349accf5"). InnerVolumeSpecName "kube-api-access-txw4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.926170 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" (UID: "bcf1dca4-fb5d-47c3-a0be-3b0c349accf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.943863 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-config-data" (OuterVolumeSpecName: "config-data") pod "bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" (UID: "bcf1dca4-fb5d-47c3-a0be-3b0c349accf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.995370 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txw4x\" (UniqueName: \"kubernetes.io/projected/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-kube-api-access-txw4x\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.995400 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:48 crc kubenswrapper[4789]: I1124 11:48:48.995412 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.676075 4789 generic.go:334] "Generic (PLEG): container finished" podID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerID="627e6bb55411e8ef976d0e3e1a93afb24b5ea845a93f488da90a7c217d3ae43c" exitCode=0
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.676514 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d771d30-09b4-484e-8421-cc33d10bc26a","Type":"ContainerDied","Data":"627e6bb55411e8ef976d0e3e1a93afb24b5ea845a93f488da90a7c217d3ae43c"}
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.680416 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.686332 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcf1dca4-fb5d-47c3-a0be-3b0c349accf5","Type":"ContainerDied","Data":"d3dd9e63fa4adafc168dd1035c77238c1245120fc4bed161d45b8fc395a4d547"}
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.686400 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.686409 4789 scope.go:117] "RemoveContainer" containerID="a9d5b134385432ce36c34376bcf40d9a193bb97c200f38d56d81ce58a44ddc4c"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.690279 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"51167964-7234-4713-aef7-4f75548e9040","Type":"ContainerStarted","Data":"788e3a8444166466b31a721724afaa53eccf5d02090374d3f60b50edc54145b5"}
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.690768 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.752545 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.760394 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.764647 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.764627656 podStartE2EDuration="2.764627656s" podCreationTimestamp="2025-11-24 11:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:48:49.752743148 +0000 UTC m=+1112.335214527" watchObservedRunningTime="2025-11-24 11:48:49.764627656 +0000 UTC m=+1112.347099035"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.788863 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 11:48:49 crc kubenswrapper[4789]: E1124 11:48:49.789288 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" containerName="nova-scheduler-scheduler"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.789306 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" containerName="nova-scheduler-scheduler"
Nov 24 11:48:49 crc kubenswrapper[4789]: E1124 11:48:49.789344 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-log"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.789350 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-log"
Nov 24 11:48:49 crc kubenswrapper[4789]: E1124 11:48:49.789362 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-api"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.789369 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-api"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.789571 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-log"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.789590 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" containerName="nova-scheduler-scheduler"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.789601 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" containerName="nova-api-api"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.790218 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.793364 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.797985 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.843421 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-combined-ca-bundle\") pod \"0d771d30-09b4-484e-8421-cc33d10bc26a\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") "
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.843528 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d771d30-09b4-484e-8421-cc33d10bc26a-logs\") pod \"0d771d30-09b4-484e-8421-cc33d10bc26a\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") "
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.843631 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kctm\" (UniqueName: \"kubernetes.io/projected/0d771d30-09b4-484e-8421-cc33d10bc26a-kube-api-access-4kctm\") pod \"0d771d30-09b4-484e-8421-cc33d10bc26a\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") "
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.843701 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-config-data\") pod \"0d771d30-09b4-484e-8421-cc33d10bc26a\" (UID: \"0d771d30-09b4-484e-8421-cc33d10bc26a\") "
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.847024 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d771d30-09b4-484e-8421-cc33d10bc26a-logs" (OuterVolumeSpecName: "logs") pod "0d771d30-09b4-484e-8421-cc33d10bc26a" (UID: "0d771d30-09b4-484e-8421-cc33d10bc26a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.851975 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d771d30-09b4-484e-8421-cc33d10bc26a-kube-api-access-4kctm" (OuterVolumeSpecName: "kube-api-access-4kctm") pod "0d771d30-09b4-484e-8421-cc33d10bc26a" (UID: "0d771d30-09b4-484e-8421-cc33d10bc26a"). InnerVolumeSpecName "kube-api-access-4kctm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.874854 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-config-data" (OuterVolumeSpecName: "config-data") pod "0d771d30-09b4-484e-8421-cc33d10bc26a" (UID: "0d771d30-09b4-484e-8421-cc33d10bc26a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.877122 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d771d30-09b4-484e-8421-cc33d10bc26a" (UID: "0d771d30-09b4-484e-8421-cc33d10bc26a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.945571 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.945645 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgvq\" (UniqueName: \"kubernetes.io/projected/9be05943-90fe-4fef-9251-3b8cce1b1d70-kube-api-access-mhgvq\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.945753 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-config-data\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.945833 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.945848 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d771d30-09b4-484e-8421-cc33d10bc26a-logs\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.945859 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kctm\" (UniqueName: \"kubernetes.io/projected/0d771d30-09b4-484e-8421-cc33d10bc26a-kube-api-access-4kctm\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:49 crc kubenswrapper[4789]: I1124 11:48:49.945873 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d771d30-09b4-484e-8421-cc33d10bc26a-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.046557 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-config-data\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.046639 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.046675 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgvq\" (UniqueName: \"kubernetes.io/projected/9be05943-90fe-4fef-9251-3b8cce1b1d70-kube-api-access-mhgvq\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.050330 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-config-data\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.051088 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.067990 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhgvq\" (UniqueName: \"kubernetes.io/projected/9be05943-90fe-4fef-9251-3b8cce1b1d70-kube-api-access-mhgvq\") pod \"nova-scheduler-0\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " pod="openstack/nova-scheduler-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.114446 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.162627 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.162674 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.162711 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.163366 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3cea7aef07d9136d7cecc4814ad70b6e4b4a4c56940366aabbc6b2f1bc56ebf"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.163412 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://f3cea7aef07d9136d7cecc4814ad70b6e4b4a4c56940366aabbc6b2f1bc56ebf" gracePeriod=600
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.187133 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf1dca4-fb5d-47c3-a0be-3b0c349accf5" path="/var/lib/kubelet/pods/bcf1dca4-fb5d-47c3-a0be-3b0c349accf5/volumes"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.616164 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 11:48:50 crc kubenswrapper[4789]: W1124 11:48:50.623274 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9be05943_90fe_4fef_9251_3b8cce1b1d70.slice/crio-a60550446a4a1769c95d23bdef74b32d52837024a81956ec73fc9afdb9863294 WatchSource:0}: Error finding container a60550446a4a1769c95d23bdef74b32d52837024a81956ec73fc9afdb9863294: Status 404 returned error can't find the container with id a60550446a4a1769c95d23bdef74b32d52837024a81956ec73fc9afdb9863294
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.701690 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.701930 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d771d30-09b4-484e-8421-cc33d10bc26a","Type":"ContainerDied","Data":"c25071ced57fd1c28ecb88dcca712a54c3b7e114f99351fa4b2944b7059c40fb"}
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.701994 4789 scope.go:117] "RemoveContainer" containerID="627e6bb55411e8ef976d0e3e1a93afb24b5ea845a93f488da90a7c217d3ae43c"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.708716 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="f3cea7aef07d9136d7cecc4814ad70b6e4b4a4c56940366aabbc6b2f1bc56ebf" exitCode=0
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.708906 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"f3cea7aef07d9136d7cecc4814ad70b6e4b4a4c56940366aabbc6b2f1bc56ebf"}
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.709208 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"a7f4024a35602eb88a760e42e4dc78156ab6feb43e0ae706700d1e332b76e45c"}
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.712792 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9be05943-90fe-4fef-9251-3b8cce1b1d70","Type":"ContainerStarted","Data":"a60550446a4a1769c95d23bdef74b32d52837024a81956ec73fc9afdb9863294"}
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.736050 4789 scope.go:117] "RemoveContainer" containerID="d92c6a933bc93c96e3db231af4b8ad55c621d619c486534e8417fa737936f1ba"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.767886 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.784117 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.787734 4789 scope.go:117] "RemoveContainer" containerID="4aecda2250b38282b436cf65055990a602ab1ffc6d48744037d9fd3637b96bdb"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.798523 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.811677 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.812009 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.819605 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.962285 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-config-data\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.962335 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pww7x\" (UniqueName: \"kubernetes.io/projected/b865603a-95b2-43d4-8735-107d5e594b19-kube-api-access-pww7x\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.962402 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b865603a-95b2-43d4-8735-107d5e594b19-logs\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:50 crc kubenswrapper[4789]: I1124 11:48:50.962475 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.063991 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b865603a-95b2-43d4-8735-107d5e594b19-logs\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.064182 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.064291 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-config-data\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.064387 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pww7x\" (UniqueName: \"kubernetes.io/projected/b865603a-95b2-43d4-8735-107d5e594b19-kube-api-access-pww7x\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.065721 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b865603a-95b2-43d4-8735-107d5e594b19-logs\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.071664 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-config-data\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.078319 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.088322 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pww7x\" (UniqueName: \"kubernetes.io/projected/b865603a-95b2-43d4-8735-107d5e594b19-kube-api-access-pww7x\") pod \"nova-api-0\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.183321 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.728305 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.731951 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9be05943-90fe-4fef-9251-3b8cce1b1d70","Type":"ContainerStarted","Data":"9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a"}
Nov 24 11:48:51 crc kubenswrapper[4789]: W1124 11:48:51.749113 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb865603a_95b2_43d4_8735_107d5e594b19.slice/crio-f4cc7c1f3f4ef2b13c8431065350bd9fbffec812ff4815b5c6a8224b094b383a WatchSource:0}: Error finding container f4cc7c1f3f4ef2b13c8431065350bd9fbffec812ff4815b5c6a8224b094b383a: Status 404 returned error can't find the container with id f4cc7c1f3f4ef2b13c8431065350bd9fbffec812ff4815b5c6a8224b094b383a
Nov 24 11:48:51 crc kubenswrapper[4789]: I1124 11:48:51.753875 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.753859589 podStartE2EDuration="2.753859589s" podCreationTimestamp="2025-11-24 11:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:48:51.750020686 +0000 UTC m=+1114.332492065" watchObservedRunningTime="2025-11-24 11:48:51.753859589 +0000 UTC m=+1114.336330958"
Nov 24 11:48:52 crc kubenswrapper[4789]: I1124 11:48:52.178953 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d771d30-09b4-484e-8421-cc33d10bc26a" path="/var/lib/kubelet/pods/0d771d30-09b4-484e-8421-cc33d10bc26a/volumes"
Nov 24 11:48:52 crc kubenswrapper[4789]: I1124 11:48:52.748900 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b865603a-95b2-43d4-8735-107d5e594b19","Type":"ContainerStarted","Data":"5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095"}
Nov 24 11:48:52 crc kubenswrapper[4789]: I1124 11:48:52.749226 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b865603a-95b2-43d4-8735-107d5e594b19","Type":"ContainerStarted","Data":"cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c"}
Nov 24 11:48:52 crc kubenswrapper[4789]: I1124 11:48:52.749244 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b865603a-95b2-43d4-8735-107d5e594b19","Type":"ContainerStarted","Data":"f4cc7c1f3f4ef2b13c8431065350bd9fbffec812ff4815b5c6a8224b094b383a"}
Nov 24 11:48:52 crc kubenswrapper[4789]: I1124 11:48:52.773843 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.773822893 podStartE2EDuration="2.773822893s" podCreationTimestamp="2025-11-24 11:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:48:52.767939761 +0000 UTC m=+1115.350411150" watchObservedRunningTime="2025-11-24 11:48:52.773822893 +0000 UTC m=+1115.356294272"
Nov 24 11:48:55 crc kubenswrapper[4789]: I1124 11:48:55.115567 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 24 11:48:57 crc kubenswrapper[4789]: I1124 11:48:57.809730 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 24 11:48:58 crc kubenswrapper[4789]: I1124 11:48:58.033043 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 24 11:49:00 crc kubenswrapper[4789]: I1124 11:49:00.115130 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 24 11:49:00 crc kubenswrapper[4789]: I1124 11:49:00.201483 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 24 11:49:00 crc kubenswrapper[4789]: I1124 11:49:00.537428 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 11:49:00 crc kubenswrapper[4789]: I1124 11:49:00.537687 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" containerName="kube-state-metrics" containerID="cri-o://9a1d2b3e3f422c34a2e01942cf5675ee421c148d68e52b751d3037eccc50f6c5" gracePeriod=30
Nov 24 11:49:00 crc kubenswrapper[4789]: I1124 11:49:00.853061 4789 generic.go:334] "Generic (PLEG): container finished" podID="e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" containerID="9a1d2b3e3f422c34a2e01942cf5675ee421c148d68e52b751d3037eccc50f6c5" exitCode=2
Nov 24 11:49:00 crc kubenswrapper[4789]: I1124 11:49:00.854210 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1","Type":"ContainerDied","Data":"9a1d2b3e3f422c34a2e01942cf5675ee421c148d68e52b751d3037eccc50f6c5"}
Nov 24 11:49:00 crc kubenswrapper[4789]: I1124 11:49:00.882094 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.017718 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.073218 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkwlb\" (UniqueName: \"kubernetes.io/projected/e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1-kube-api-access-dkwlb\") pod \"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1\" (UID: \"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1\") "
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.080649 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1-kube-api-access-dkwlb" (OuterVolumeSpecName: "kube-api-access-dkwlb") pod "e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" (UID: "e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1"). InnerVolumeSpecName "kube-api-access-dkwlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.175302 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkwlb\" (UniqueName: \"kubernetes.io/projected/e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1-kube-api-access-dkwlb\") on node \"crc\" DevicePath \"\""
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.184515 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.184557 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.854099 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.854726 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="ceilometer-central-agent" containerID="cri-o://f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c" gracePeriod=30
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.854976 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="sg-core" containerID="cri-o://4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f" gracePeriod=30
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.855070 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="ceilometer-notification-agent" containerID="cri-o://5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111" gracePeriod=30
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.854988 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="proxy-httpd" containerID="cri-o://e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12" gracePeriod=30
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.870454 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1","Type":"ContainerDied","Data":"94869572205dae659d2f98d4e2b86acae8ea33c319393b41f194be7051a21d21"}
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.870547 4789 scope.go:117] "RemoveContainer" containerID="9a1d2b3e3f422c34a2e01942cf5675ee421c148d68e52b751d3037eccc50f6c5"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.870494 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.912607 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.921118 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.934498 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 11:49:01 crc kubenswrapper[4789]: E1124 11:49:01.934838 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" containerName="kube-state-metrics"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.934853 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" containerName="kube-state-metrics"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.935011 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" containerName="kube-state-metrics"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.935552 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.940188 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.940366 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.949907 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.984299 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.984423 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5552k\" (UniqueName: \"kubernetes.io/projected/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-api-access-5552k\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.984448 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:01 crc kubenswrapper[4789]: I1124 11:49:01.984477 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.086654 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.086780 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5552k\" (UniqueName: \"kubernetes.io/projected/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-api-access-5552k\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.086809 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.086829 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.091898 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.092421 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.095043 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.114705 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5552k\" (UniqueName: \"kubernetes.io/projected/8bfbe7a9-1f95-4bfa-b298-71ce199ba20c-kube-api-access-5552k\") pod \"kube-state-metrics-0\" (UID: \"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c\") " pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.179687 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1" path="/var/lib/kubelet/pods/e2c4a6c2-feeb-4afe-bfd8-9c79e65736e1/volumes"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.266661 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.268729 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.268976 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.790030 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.886141 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c","Type":"ContainerStarted","Data":"3272ecae19e6fe0f2eaa0e405f53915d329b1283ce4ceaf3d7a8e2f9199e9e99"}
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.889696 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerID="e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12" exitCode=0
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.889748 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerID="4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f" exitCode=2
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.889762 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerID="f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c" exitCode=0
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.889777 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerDied","Data":"e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12"}
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.889834 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerDied","Data":"4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f"}
Nov 24 11:49:02 crc kubenswrapper[4789]: I1124 11:49:02.889848 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerDied","Data":"f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c"}
Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.729784 4789 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.900671 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerID="5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111" exitCode=0 Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.900759 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerDied","Data":"5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111"} Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.901138 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4b50e92-a4f9-47fa-816a-3a1fb96ec247","Type":"ContainerDied","Data":"8e85a9a2036fa02ccc1c0d13e022623421f09868474e487eaf60b8fc565fcb02"} Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.900776 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.901182 4789 scope.go:117] "RemoveContainer" containerID="e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.902705 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8bfbe7a9-1f95-4bfa-b298-71ce199ba20c","Type":"ContainerStarted","Data":"a778f44babd2b71a258ea88edd2406a690a53de00ff31fb08792bd9d9f4b3c23"} Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.903017 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.920058 4789 scope.go:117] "RemoveContainer" containerID="4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.925615 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-sg-core-conf-yaml\") pod \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.925740 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-scripts\") pod \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.925765 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-combined-ca-bundle\") pod \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.925801 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-log-httpd\") pod \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.925840 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-config-data\") pod \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.925923 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-run-httpd\") pod \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.925966 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wswf\" (UniqueName: \"kubernetes.io/projected/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-kube-api-access-5wswf\") pod \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\" (UID: \"f4b50e92-a4f9-47fa-816a-3a1fb96ec247\") " Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.927916 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f4b50e92-a4f9-47fa-816a-3a1fb96ec247" (UID: "f4b50e92-a4f9-47fa-816a-3a1fb96ec247"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.928672 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f4b50e92-a4f9-47fa-816a-3a1fb96ec247" (UID: "f4b50e92-a4f9-47fa-816a-3a1fb96ec247"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.937558 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.544034169 podStartE2EDuration="2.937537081s" podCreationTimestamp="2025-11-24 11:49:01 +0000 UTC" firstStartedPulling="2025-11-24 11:49:02.796126371 +0000 UTC m=+1125.378597760" lastFinishedPulling="2025-11-24 11:49:03.189629293 +0000 UTC m=+1125.772100672" observedRunningTime="2025-11-24 11:49:03.9221899 +0000 UTC m=+1126.504661279" watchObservedRunningTime="2025-11-24 11:49:03.937537081 +0000 UTC m=+1126.520008460" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.940224 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-scripts" (OuterVolumeSpecName: "scripts") pod "f4b50e92-a4f9-47fa-816a-3a1fb96ec247" (UID: "f4b50e92-a4f9-47fa-816a-3a1fb96ec247"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.950748 4789 scope.go:117] "RemoveContainer" containerID="5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.950778 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-kube-api-access-5wswf" (OuterVolumeSpecName: "kube-api-access-5wswf") pod "f4b50e92-a4f9-47fa-816a-3a1fb96ec247" (UID: "f4b50e92-a4f9-47fa-816a-3a1fb96ec247"). InnerVolumeSpecName "kube-api-access-5wswf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.979832 4789 scope.go:117] "RemoveContainer" containerID="f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c" Nov 24 11:49:03 crc kubenswrapper[4789]: I1124 11:49:03.999941 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f4b50e92-a4f9-47fa-816a-3a1fb96ec247" (UID: "f4b50e92-a4f9-47fa-816a-3a1fb96ec247"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.028683 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wswf\" (UniqueName: \"kubernetes.io/projected/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-kube-api-access-5wswf\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.028724 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.028738 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.028748 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.028758 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.036576 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4b50e92-a4f9-47fa-816a-3a1fb96ec247" (UID: "f4b50e92-a4f9-47fa-816a-3a1fb96ec247"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.087635 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-config-data" (OuterVolumeSpecName: "config-data") pod "f4b50e92-a4f9-47fa-816a-3a1fb96ec247" (UID: "f4b50e92-a4f9-47fa-816a-3a1fb96ec247"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.130761 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.131004 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b50e92-a4f9-47fa-816a-3a1fb96ec247-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.142326 4789 scope.go:117] "RemoveContainer" containerID="e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12" Nov 24 11:49:04 crc kubenswrapper[4789]: E1124 11:49:04.142963 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12\": container with ID starting with e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12 not found: ID does not exist" containerID="e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.143003 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12"} err="failed to get container status \"e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12\": rpc error: code = NotFound desc = could not find container \"e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12\": container with ID starting with e1dfa4b5aa44972c25dc9122e073a92fa02c8d1c3cce3503b2c6ea7c5bbc5a12 not found: ID does not exist" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.143049 4789 scope.go:117] "RemoveContainer" containerID="4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f" Nov 24 11:49:04 crc kubenswrapper[4789]: E1124 11:49:04.143421 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f\": container with ID starting with 4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f not found: ID does not exist" containerID="4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.143491 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f"} err="failed to get container status \"4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f\": rpc error: code = NotFound desc = could not find container \"4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f\": container with ID starting with 4b066efcd23513829170ff9fb66fa97f55f242c46ca5b0c92f0c373c979f4a3f not found: ID does not exist" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.143524 4789 scope.go:117] "RemoveContainer" containerID="5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111" Nov 24 11:49:04 crc kubenswrapper[4789]: E1124 11:49:04.143853 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111\": container with ID starting with 
5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111 not found: ID does not exist" containerID="5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.143892 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111"} err="failed to get container status \"5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111\": rpc error: code = NotFound desc = could not find container \"5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111\": container with ID starting with 5c04a93eecd837d9a9e3433338187a510eed03aac51671a494f01fe70a061111 not found: ID does not exist" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.143913 4789 scope.go:117] "RemoveContainer" containerID="f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c" Nov 24 11:49:04 crc kubenswrapper[4789]: E1124 11:49:04.144189 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c\": container with ID starting with f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c not found: ID does not exist" containerID="f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.144226 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c"} err="failed to get container status \"f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c\": rpc error: code = NotFound desc = could not find container \"f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c\": container with ID starting with f67098998f5a9d567dbe4cbc128651b90b5d2d4f398e96701e71bbfcfaa0068c not found: ID does not exist" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.222618 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.232036 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.249407 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:04 crc kubenswrapper[4789]: E1124 11:49:04.249799 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="ceilometer-central-agent" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.249816 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="ceilometer-central-agent" Nov 24 11:49:04 crc kubenswrapper[4789]: E1124 11:49:04.249838 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="proxy-httpd" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.249843 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="proxy-httpd" Nov 24 11:49:04 crc kubenswrapper[4789]: E1124 11:49:04.249870 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="sg-core" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.249877 4789 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="sg-core" Nov 24 11:49:04 crc kubenswrapper[4789]: E1124 11:49:04.249893 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="ceilometer-notification-agent" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.249898 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="ceilometer-notification-agent" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.250090 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="sg-core" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.250104 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="proxy-httpd" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.250120 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="ceilometer-notification-agent" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.250126 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" containerName="ceilometer-central-agent" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.252727 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.262932 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.263101 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.263498 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.286008 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.336385 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-config-data\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.337359 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.337401 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.337422 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-log-httpd\") pod 
\"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.337504 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-run-httpd\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.337559 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-scripts\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.337606 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.337627 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxgwr\" (UniqueName: \"kubernetes.io/projected/968bf2c8-b168-45f2-87ef-54a0b2564ba9-kube-api-access-pxgwr\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.439377 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.439448 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.439504 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-log-httpd\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.439617 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-run-httpd\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.439699 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-scripts\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.439835 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.439887 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxgwr\" (UniqueName: \"kubernetes.io/projected/968bf2c8-b168-45f2-87ef-54a0b2564ba9-kube-api-access-pxgwr\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.439933 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-config-data\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.440119 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-log-httpd\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.440286 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-run-httpd\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.442522 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.443248 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.445261 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.446867 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-scripts\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.456852 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-config-data\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.459137 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxgwr\" (UniqueName: 
\"kubernetes.io/projected/968bf2c8-b168-45f2-87ef-54a0b2564ba9-kube-api-access-pxgwr\") pod \"ceilometer-0\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " pod="openstack/ceilometer-0" Nov 24 11:49:04 crc kubenswrapper[4789]: I1124 11:49:04.573423 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:49:05 crc kubenswrapper[4789]: I1124 11:49:05.100891 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:05 crc kubenswrapper[4789]: W1124 11:49:05.106389 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod968bf2c8_b168_45f2_87ef_54a0b2564ba9.slice/crio-708d997ad0016292ee5dce6e0a1b10313024745f3946fcfa263f268d7f72e2cc WatchSource:0}: Error finding container 708d997ad0016292ee5dce6e0a1b10313024745f3946fcfa263f268d7f72e2cc: Status 404 returned error can't find the container with id 708d997ad0016292ee5dce6e0a1b10313024745f3946fcfa263f268d7f72e2cc Nov 24 11:49:05 crc kubenswrapper[4789]: I1124 11:49:05.933891 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerStarted","Data":"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716"} Nov 24 11:49:05 crc kubenswrapper[4789]: I1124 11:49:05.933943 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerStarted","Data":"708d997ad0016292ee5dce6e0a1b10313024745f3946fcfa263f268d7f72e2cc"} Nov 24 11:49:06 crc kubenswrapper[4789]: I1124 11:49:06.182909 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b50e92-a4f9-47fa-816a-3a1fb96ec247" path="/var/lib/kubelet/pods/f4b50e92-a4f9-47fa-816a-3a1fb96ec247/volumes" Nov 24 11:49:06 crc kubenswrapper[4789]: I1124 11:49:06.944718 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerStarted","Data":"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68"} Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.823965 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.955814 4789 generic.go:334] "Generic (PLEG): container finished" podID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerID="4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190" exitCode=137 Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.955845 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.955897 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.955894 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ef33760-b229-42f2-9197-57ff1a2d8d3b","Type":"ContainerDied","Data":"4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190"} Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.956180 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ef33760-b229-42f2-9197-57ff1a2d8d3b","Type":"ContainerDied","Data":"38e325489592fe113355994c3ce78accdbe42f43fdc9bdf46bcc6bcc253ca229"} Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.956198 4789 scope.go:117] "RemoveContainer" containerID="4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190" Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.960200 4789 generic.go:334] "Generic (PLEG): container finished" podID="f527a2d4-6a1e-4c79-9437-a216f724aa62" containerID="0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4" exitCode=137 Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.960375 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f527a2d4-6a1e-4c79-9437-a216f724aa62","Type":"ContainerDied","Data":"0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4"} Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.960407 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f527a2d4-6a1e-4c79-9437-a216f724aa62","Type":"ContainerDied","Data":"7d6bd70e7fadc5bd0c02508d6b80b7d6056b1d1da5934c853a490c37982143c8"} Nov 24 11:49:07 crc kubenswrapper[4789]: I1124 11:49:07.973115 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerStarted","Data":"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d"} Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.003719 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ef33760-b229-42f2-9197-57ff1a2d8d3b-logs\") pod \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.004008 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-config-data\") pod \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.004159 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-combined-ca-bundle\") pod \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.004324 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj9hh\" (UniqueName: \"kubernetes.io/projected/6ef33760-b229-42f2-9197-57ff1a2d8d3b-kube-api-access-mj9hh\") pod \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\" (UID: \"6ef33760-b229-42f2-9197-57ff1a2d8d3b\") " Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.009655 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6ef33760-b229-42f2-9197-57ff1a2d8d3b-logs" (OuterVolumeSpecName: "logs") pod "6ef33760-b229-42f2-9197-57ff1a2d8d3b" (UID: "6ef33760-b229-42f2-9197-57ff1a2d8d3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.016793 4789 scope.go:117] "RemoveContainer" containerID="652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.021610 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef33760-b229-42f2-9197-57ff1a2d8d3b-kube-api-access-mj9hh" (OuterVolumeSpecName: "kube-api-access-mj9hh") pod "6ef33760-b229-42f2-9197-57ff1a2d8d3b" (UID: "6ef33760-b229-42f2-9197-57ff1a2d8d3b"). InnerVolumeSpecName "kube-api-access-mj9hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.044065 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ef33760-b229-42f2-9197-57ff1a2d8d3b" (UID: "6ef33760-b229-42f2-9197-57ff1a2d8d3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.056125 4789 scope.go:117] "RemoveContainer" containerID="4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190" Nov 24 11:49:08 crc kubenswrapper[4789]: E1124 11:49:08.056594 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190\": container with ID starting with 4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190 not found: ID does not exist" containerID="4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.056644 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190"} err="failed to get container status \"4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190\": rpc error: code = NotFound desc = could not find container \"4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190\": container with ID starting with 4f0b64c953552ddf85371c735b109fa6d0c1abfef8cd839a5be7d7727cac7190 not found: ID does not exist" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.056665 4789 scope.go:117] "RemoveContainer" containerID="652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23" Nov 24 11:49:08 crc kubenswrapper[4789]: E1124 11:49:08.058548 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23\": container with ID starting with 652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23 not found: ID does not exist" containerID="652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.058573 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23"} err="failed to get container status 
\"652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23\": rpc error: code = NotFound desc = could not find container \"652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23\": container with ID starting with 652d0bbeeeede08cb76864fa96c5a9ead1089170e6c7ac445c4157b881e70a23 not found: ID does not exist" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.058586 4789 scope.go:117] "RemoveContainer" containerID="0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.060951 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-config-data" (OuterVolumeSpecName: "config-data") pod "6ef33760-b229-42f2-9197-57ff1a2d8d3b" (UID: "6ef33760-b229-42f2-9197-57ff1a2d8d3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.077707 4789 scope.go:117] "RemoveContainer" containerID="0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4" Nov 24 11:49:08 crc kubenswrapper[4789]: E1124 11:49:08.078842 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4\": container with ID starting with 0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4 not found: ID does not exist" containerID="0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.078874 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4"} err="failed to get container status \"0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4\": rpc error: code = NotFound desc = could not find container \"0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4\": container with ID starting with 0b2ab1943ef9ea8947b3f00c9cf370b38638585d7a847dab99d7251922b4d1f4 not found: ID does not exist" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.106351 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-config-data\") pod \"f527a2d4-6a1e-4c79-9437-a216f724aa62\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.106477 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh2f7\" (UniqueName: \"kubernetes.io/projected/f527a2d4-6a1e-4c79-9437-a216f724aa62-kube-api-access-zh2f7\") pod \"f527a2d4-6a1e-4c79-9437-a216f724aa62\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.106546 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-combined-ca-bundle\") pod \"f527a2d4-6a1e-4c79-9437-a216f724aa62\" (UID: \"f527a2d4-6a1e-4c79-9437-a216f724aa62\") " Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.106991 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ef33760-b229-42f2-9197-57ff1a2d8d3b-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.107015 
4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.107028 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef33760-b229-42f2-9197-57ff1a2d8d3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.107042 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj9hh\" (UniqueName: \"kubernetes.io/projected/6ef33760-b229-42f2-9197-57ff1a2d8d3b-kube-api-access-mj9hh\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.110196 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f527a2d4-6a1e-4c79-9437-a216f724aa62-kube-api-access-zh2f7" (OuterVolumeSpecName: "kube-api-access-zh2f7") pod "f527a2d4-6a1e-4c79-9437-a216f724aa62" (UID: "f527a2d4-6a1e-4c79-9437-a216f724aa62"). InnerVolumeSpecName "kube-api-access-zh2f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.134814 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-config-data" (OuterVolumeSpecName: "config-data") pod "f527a2d4-6a1e-4c79-9437-a216f724aa62" (UID: "f527a2d4-6a1e-4c79-9437-a216f724aa62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.164426 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f527a2d4-6a1e-4c79-9437-a216f724aa62" (UID: "f527a2d4-6a1e-4c79-9437-a216f724aa62"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.209180 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh2f7\" (UniqueName: \"kubernetes.io/projected/f527a2d4-6a1e-4c79-9437-a216f724aa62-kube-api-access-zh2f7\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.209208 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.209217 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527a2d4-6a1e-4c79-9437-a216f724aa62-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.309178 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.318183 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.328939 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:08 crc kubenswrapper[4789]: E1124 11:49:08.329286 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerName="nova-metadata-log" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.329302 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerName="nova-metadata-log" Nov 24 11:49:08 crc kubenswrapper[4789]: E1124 11:49:08.329323 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerName="nova-metadata-metadata" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.329329 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerName="nova-metadata-metadata" Nov 24 11:49:08 crc kubenswrapper[4789]: E1124 11:49:08.329352 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f527a2d4-6a1e-4c79-9437-a216f724aa62" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.329360 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f527a2d4-6a1e-4c79-9437-a216f724aa62" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.329536 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerName="nova-metadata-metadata" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.329548 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" containerName="nova-metadata-log" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.329570 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f527a2d4-6a1e-4c79-9437-a216f724aa62" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.330403 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.339747 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.349869 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.355659 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.513486 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-config-data\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.513590 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.513641 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-logs\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.513704 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.513735 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlvq\" (UniqueName: \"kubernetes.io/projected/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-kube-api-access-twlvq\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.615255 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.615315 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlvq\" (UniqueName: \"kubernetes.io/projected/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-kube-api-access-twlvq\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.615427 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-config-data\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " 
pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.615512 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.615553 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-logs\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.616047 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-logs\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.621068 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.621068 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.621694 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-config-data\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.639130 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlvq\" (UniqueName: \"kubernetes.io/projected/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-kube-api-access-twlvq\") pod \"nova-metadata-0\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.670338 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.986524 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.994760 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerStarted","Data":"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c"} Nov 24 11:49:08 crc kubenswrapper[4789]: I1124 11:49:08.994900 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.033761 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.043189 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.051646 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.052584 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.104987001 podStartE2EDuration="5.052555819s" podCreationTimestamp="2025-11-24 11:49:04 +0000 UTC" firstStartedPulling="2025-11-24 11:49:05.108756012 +0000 UTC m=+1127.691227391" lastFinishedPulling="2025-11-24 11:49:08.05632483 +0000 UTC m=+1130.638796209" observedRunningTime="2025-11-24 11:49:09.028362114 +0000 UTC m=+1131.610833513" watchObservedRunningTime="2025-11-24 11:49:09.052555819 +0000 UTC m=+1131.635027198" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.052838 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.071018 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.088297 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.088584 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.088805 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.138429 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.234987 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.235053 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.235077 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt4tt\" (UniqueName: \"kubernetes.io/projected/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-kube-api-access-zt4tt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.235131 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.235155 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.337015 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.337133 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.337189 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt4tt\" (UniqueName: \"kubernetes.io/projected/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-kube-api-access-zt4tt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.338710 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.338783 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.340938 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.341994 4789 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.343359 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.343761 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.352605 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt4tt\" (UniqueName: \"kubernetes.io/projected/6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62-kube-api-access-zt4tt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.403229 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:09 crc kubenswrapper[4789]: W1124 11:49:09.843510 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a12e5d7_5339_4a7b_a9d1_0355b3b2fd62.slice/crio-b9fdb682e4dfc225e47dcf425652c1edba9e5350d4555c17540dc5571441d534 WatchSource:0}: Error finding container b9fdb682e4dfc225e47dcf425652c1edba9e5350d4555c17540dc5571441d534: Status 404 returned error can't find the container with id b9fdb682e4dfc225e47dcf425652c1edba9e5350d4555c17540dc5571441d534 Nov 24 11:49:09 crc kubenswrapper[4789]: I1124 11:49:09.845744 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:49:10 crc kubenswrapper[4789]: I1124 11:49:10.014044 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62","Type":"ContainerStarted","Data":"b9fdb682e4dfc225e47dcf425652c1edba9e5350d4555c17540dc5571441d534"} Nov 24 11:49:10 crc kubenswrapper[4789]: I1124 11:49:10.017549 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c375501-c3aa-4a6e-b0bc-9991f2d56b37","Type":"ContainerStarted","Data":"fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228"} Nov 24 11:49:10 crc kubenswrapper[4789]: I1124 11:49:10.017600 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c375501-c3aa-4a6e-b0bc-9991f2d56b37","Type":"ContainerStarted","Data":"322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b"} Nov 24 11:49:10 crc kubenswrapper[4789]: I1124 11:49:10.017615 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c375501-c3aa-4a6e-b0bc-9991f2d56b37","Type":"ContainerStarted","Data":"0106898767ce904c5278c36e8551ddbc9cd854b9815bc6c8c7cb0135a4bc649f"} Nov 24 11:49:10 crc kubenswrapper[4789]: I1124 
11:49:10.046465 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.046421272 podStartE2EDuration="2.046421272s" podCreationTimestamp="2025-11-24 11:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:49:10.039143147 +0000 UTC m=+1132.621614526" watchObservedRunningTime="2025-11-24 11:49:10.046421272 +0000 UTC m=+1132.628892651" Nov 24 11:49:10 crc kubenswrapper[4789]: I1124 11:49:10.197785 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef33760-b229-42f2-9197-57ff1a2d8d3b" path="/var/lib/kubelet/pods/6ef33760-b229-42f2-9197-57ff1a2d8d3b/volumes" Nov 24 11:49:10 crc kubenswrapper[4789]: I1124 11:49:10.198662 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f527a2d4-6a1e-4c79-9437-a216f724aa62" path="/var/lib/kubelet/pods/f527a2d4-6a1e-4c79-9437-a216f724aa62/volumes" Nov 24 11:49:11 crc kubenswrapper[4789]: I1124 11:49:11.029040 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62","Type":"ContainerStarted","Data":"0e9c401f71edbbf664abe012c4197c2a9980798304ef986e8b07db9fda79f8ac"} Nov 24 11:49:11 crc kubenswrapper[4789]: I1124 11:49:11.049327 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.049299654 podStartE2EDuration="2.049299654s" podCreationTimestamp="2025-11-24 11:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:49:11.046188808 +0000 UTC m=+1133.628660197" watchObservedRunningTime="2025-11-24 11:49:11.049299654 +0000 UTC m=+1133.631771043" Nov 24 11:49:11 crc kubenswrapper[4789]: I1124 11:49:11.187381 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:49:11 crc kubenswrapper[4789]: I1124 11:49:11.187988 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:49:11 crc kubenswrapper[4789]: I1124 11:49:11.192924 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:49:11 crc kubenswrapper[4789]: I1124 11:49:11.198315 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.036440 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.042251 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.262397 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-sdbck"] Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.263755 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.284960 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-sdbck"] Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.338924 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.412465 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.412563 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-config\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.412593 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.412636 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4bk5\" (UniqueName: \"kubernetes.io/projected/aac75533-96ca-444e-9f80-862d3dab3959-kube-api-access-f4bk5\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.412671 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.514308 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-config\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.514385 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.514441 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4bk5\" (UniqueName: \"kubernetes.io/projected/aac75533-96ca-444e-9f80-862d3dab3959-kube-api-access-f4bk5\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " 
pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.514531 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.514596 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.515174 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-config\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.515251 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.515507 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.515550 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.534894 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4bk5\" (UniqueName: \"kubernetes.io/projected/aac75533-96ca-444e-9f80-862d3dab3959-kube-api-access-f4bk5\") pod \"dnsmasq-dns-68d4b6d797-sdbck\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:12 crc kubenswrapper[4789]: I1124 11:49:12.582368 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:13 crc kubenswrapper[4789]: I1124 11:49:13.242347 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-sdbck"] Nov 24 11:49:13 crc kubenswrapper[4789]: W1124 11:49:13.249263 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaac75533_96ca_444e_9f80_862d3dab3959.slice/crio-6cc27eec0ae7a1ae99d7904311d401d468dc13b707949854435e5fb198e54e7f WatchSource:0}: Error finding container 6cc27eec0ae7a1ae99d7904311d401d468dc13b707949854435e5fb198e54e7f: Status 404 returned error can't find the container with id 6cc27eec0ae7a1ae99d7904311d401d468dc13b707949854435e5fb198e54e7f Nov 24 11:49:13 crc kubenswrapper[4789]: I1124 11:49:13.671066 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:49:13 crc kubenswrapper[4789]: I1124 11:49:13.671369 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.057530 4789 generic.go:334] "Generic (PLEG): container finished" podID="aac75533-96ca-444e-9f80-862d3dab3959" containerID="9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48" exitCode=0 Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.057819 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" event={"ID":"aac75533-96ca-444e-9f80-862d3dab3959","Type":"ContainerDied","Data":"9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48"} Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.058664 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" event={"ID":"aac75533-96ca-444e-9f80-862d3dab3959","Type":"ContainerStarted","Data":"6cc27eec0ae7a1ae99d7904311d401d468dc13b707949854435e5fb198e54e7f"} Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.404202 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.645784 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.939819 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.940372 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="ceilometer-central-agent" containerID="cri-o://9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716" gracePeriod=30 Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.940845 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="proxy-httpd" containerID="cri-o://3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c" gracePeriod=30 Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.940956 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="sg-core" containerID="cri-o://c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d" gracePeriod=30 Nov 24 11:49:14 crc kubenswrapper[4789]: I1124 11:49:14.941043 
4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="ceilometer-notification-agent" containerID="cri-o://ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68" gracePeriod=30 Nov 24 11:49:15 crc kubenswrapper[4789]: I1124 11:49:15.074086 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" event={"ID":"aac75533-96ca-444e-9f80-862d3dab3959","Type":"ContainerStarted","Data":"5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859"} Nov 24 11:49:15 crc kubenswrapper[4789]: I1124 11:49:15.074347 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:15 crc kubenswrapper[4789]: I1124 11:49:15.080609 4789 generic.go:334] "Generic (PLEG): container finished" podID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerID="c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d" exitCode=2 Nov 24 11:49:15 crc kubenswrapper[4789]: I1124 11:49:15.080934 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-log" containerID="cri-o://cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c" gracePeriod=30 Nov 24 11:49:15 crc kubenswrapper[4789]: I1124 11:49:15.080673 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerDied","Data":"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d"} Nov 24 11:49:15 crc kubenswrapper[4789]: I1124 11:49:15.081175 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-api" containerID="cri-o://5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095" gracePeriod=30 Nov 24 11:49:15 crc kubenswrapper[4789]: I1124 11:49:15.977371 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.009261 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" podStartSLOduration=4.009239665 podStartE2EDuration="4.009239665s" podCreationTimestamp="2025-11-24 11:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:49:15.101263908 +0000 UTC m=+1137.683735287" watchObservedRunningTime="2025-11-24 11:49:16.009239665 +0000 UTC m=+1138.591711044" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.089648 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-scripts\") pod \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.090078 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-combined-ca-bundle\") pod \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.090120 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-log-httpd\") pod \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.090214 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-config-data\") pod \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.090293 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-run-httpd\") pod \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.090356 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-ceilometer-tls-certs\") pod \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.090390 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxgwr\" (UniqueName: \"kubernetes.io/projected/968bf2c8-b168-45f2-87ef-54a0b2564ba9-kube-api-access-pxgwr\") pod \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.090443 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-sg-core-conf-yaml\") pod \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\" (UID: \"968bf2c8-b168-45f2-87ef-54a0b2564ba9\") " Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.090751 4789 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "968bf2c8-b168-45f2-87ef-54a0b2564ba9" (UID: "968bf2c8-b168-45f2-87ef-54a0b2564ba9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.091126 4789 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.091253 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "968bf2c8-b168-45f2-87ef-54a0b2564ba9" (UID: "968bf2c8-b168-45f2-87ef-54a0b2564ba9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.097670 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-scripts" (OuterVolumeSpecName: "scripts") pod "968bf2c8-b168-45f2-87ef-54a0b2564ba9" (UID: "968bf2c8-b168-45f2-87ef-54a0b2564ba9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.099564 4789 generic.go:334] "Generic (PLEG): container finished" podID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerID="3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c" exitCode=0 Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.099607 4789 generic.go:334] "Generic (PLEG): container finished" podID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerID="ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68" exitCode=0 Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.099618 4789 generic.go:334] "Generic (PLEG): container finished" podID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerID="9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716" exitCode=0 Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.099736 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.100618 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerDied","Data":"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c"} Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.100654 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerDied","Data":"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68"} Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.100669 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerDied","Data":"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716"} Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.100684 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968bf2c8-b168-45f2-87ef-54a0b2564ba9","Type":"ContainerDied","Data":"708d997ad0016292ee5dce6e0a1b10313024745f3946fcfa263f268d7f72e2cc"} Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.100702 4789 scope.go:117] "RemoveContainer" containerID="3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.114469 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968bf2c8-b168-45f2-87ef-54a0b2564ba9-kube-api-access-pxgwr" (OuterVolumeSpecName: "kube-api-access-pxgwr") pod "968bf2c8-b168-45f2-87ef-54a0b2564ba9" (UID: "968bf2c8-b168-45f2-87ef-54a0b2564ba9"). InnerVolumeSpecName "kube-api-access-pxgwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.115902 4789 generic.go:334] "Generic (PLEG): container finished" podID="b865603a-95b2-43d4-8735-107d5e594b19" containerID="cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c" exitCode=143 Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.116531 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b865603a-95b2-43d4-8735-107d5e594b19","Type":"ContainerDied","Data":"cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c"} Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.126880 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "968bf2c8-b168-45f2-87ef-54a0b2564ba9" (UID: "968bf2c8-b168-45f2-87ef-54a0b2564ba9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.162619 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "968bf2c8-b168-45f2-87ef-54a0b2564ba9" (UID: "968bf2c8-b168-45f2-87ef-54a0b2564ba9"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.192752 4789 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968bf2c8-b168-45f2-87ef-54a0b2564ba9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.192783 4789 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.192793 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxgwr\" (UniqueName: \"kubernetes.io/projected/968bf2c8-b168-45f2-87ef-54a0b2564ba9-kube-api-access-pxgwr\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.192803 4789 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.192811 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.195170 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "968bf2c8-b168-45f2-87ef-54a0b2564ba9" (UID: "968bf2c8-b168-45f2-87ef-54a0b2564ba9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.230873 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-config-data" (OuterVolumeSpecName: "config-data") pod "968bf2c8-b168-45f2-87ef-54a0b2564ba9" (UID: "968bf2c8-b168-45f2-87ef-54a0b2564ba9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.276799 4789 scope.go:117] "RemoveContainer" containerID="c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.294309 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.294335 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968bf2c8-b168-45f2-87ef-54a0b2564ba9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.295151 4789 scope.go:117] "RemoveContainer" containerID="ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.315772 4789 scope.go:117] "RemoveContainer" containerID="9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.339925 4789 scope.go:117] "RemoveContainer" containerID="3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c" Nov 24 11:49:16 crc kubenswrapper[4789]: E1124 11:49:16.340546 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c\": container with ID starting with 3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c not found: ID does not exist" containerID="3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.340581 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c"} err="failed to get container status \"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c\": rpc error: code = NotFound desc = could not find container \"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c\": container with ID starting with 3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.340607 4789 scope.go:117] "RemoveContainer" containerID="c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d" Nov 24 11:49:16 crc kubenswrapper[4789]: E1124 11:49:16.342628 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d\": container with ID starting with c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d not found: ID does not exist" containerID="c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.342676 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d"} err="failed to get container status \"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d\": rpc error: code = NotFound desc = could not find container \"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d\": container with ID starting with 
c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.342706 4789 scope.go:117] "RemoveContainer" containerID="ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68" Nov 24 11:49:16 crc kubenswrapper[4789]: E1124 11:49:16.342979 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68\": container with ID starting with ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68 not found: ID does not exist" containerID="ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.343007 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68"} err="failed to get container status \"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68\": rpc error: code = NotFound desc = could not find container \"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68\": container with ID starting with ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68 not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.343036 4789 scope.go:117] "RemoveContainer" containerID="9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716" Nov 24 11:49:16 crc kubenswrapper[4789]: E1124 11:49:16.345637 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716\": container with ID starting with 9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716 not found: ID does not exist" containerID="9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.345697 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716"} err="failed to get container status \"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716\": rpc error: code = NotFound desc = could not find container \"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716\": container with ID starting with 9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716 not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.345717 4789 scope.go:117] "RemoveContainer" containerID="3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.346276 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c"} err="failed to get container status \"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c\": rpc error: code = NotFound desc = could not find container \"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c\": container with ID starting with 3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.346297 4789 scope.go:117] "RemoveContainer" containerID="c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d" Nov 24 11:49:16 crc 
kubenswrapper[4789]: I1124 11:49:16.346625 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d"} err="failed to get container status \"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d\": rpc error: code = NotFound desc = could not find container \"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d\": container with ID starting with c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.346645 4789 scope.go:117] "RemoveContainer" containerID="ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.346909 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68"} err="failed to get container status \"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68\": rpc error: code = NotFound desc = could not find container \"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68\": container with ID starting with ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68 not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.346929 4789 scope.go:117] "RemoveContainer" containerID="9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.347179 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716"} err="failed to get container status \"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716\": rpc error: code = NotFound desc = could not find container \"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716\": container with ID starting with 9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716 not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.347199 4789 scope.go:117] "RemoveContainer" containerID="3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.347438 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c"} err="failed to get container status \"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c\": rpc error: code = NotFound desc = could not find container \"3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c\": container with ID starting with 3c6cb512d6bf4619b539605606b271c410ba231f1d56f845a7fb30a92bc3683c not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.347469 4789 scope.go:117] "RemoveContainer" containerID="c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.347694 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d"} err="failed to get container status \"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d\": rpc error: code = NotFound desc = could not find container \"c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d\": container with ID 
starting with c949dd8e76491d1fe693fe8aea21f95942fc67dd3627eb9cec437afae648290d not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.347712 4789 scope.go:117] "RemoveContainer" containerID="ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.347934 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68"} err="failed to get container status \"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68\": rpc error: code = NotFound desc = could not find container \"ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68\": container with ID starting with ec59c599637614868bf7569d77e99b64af521f88112bbfa62a365b0e456aee68 not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.347952 4789 scope.go:117] "RemoveContainer" containerID="9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.348166 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716"} err="failed to get container status \"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716\": rpc error: code = NotFound desc = could not find container \"9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716\": container with ID starting with 9952208fe0c1f2807d5bf77cd17c3a1fedcf1324934092621005c60b7e9e8716 not found: ID does not exist" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.433225 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.444257 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.452855 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:16 crc kubenswrapper[4789]: E1124 11:49:16.453225 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="ceilometer-notification-agent" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.453242 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="ceilometer-notification-agent" Nov 24 11:49:16 crc kubenswrapper[4789]: E1124 11:49:16.453259 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="sg-core" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.453266 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="sg-core" Nov 24 11:49:16 crc kubenswrapper[4789]: E1124 11:49:16.453293 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="proxy-httpd" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.453299 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="proxy-httpd" Nov 24 11:49:16 crc kubenswrapper[4789]: E1124 11:49:16.453314 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="ceilometer-central-agent" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 
11:49:16.453320 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="ceilometer-central-agent" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.453497 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="ceilometer-notification-agent" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.453513 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="ceilometer-central-agent" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.453525 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="sg-core" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.453549 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" containerName="proxy-httpd" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.455010 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.457413 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.457574 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.457845 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.480045 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.598290 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c9f2fa6-041c-485c-a636-af6766444f89-run-httpd\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.598355 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpp9k\" (UniqueName: \"kubernetes.io/projected/0c9f2fa6-041c-485c-a636-af6766444f89-kube-api-access-rpp9k\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.598393 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.598502 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-scripts\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.598587 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.598612 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c9f2fa6-041c-485c-a636-af6766444f89-log-httpd\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.598653 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-config-data\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.598678 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.700959 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-scripts\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.701057 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.701087 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c9f2fa6-041c-485c-a636-af6766444f89-log-httpd\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.701129 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-config-data\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.701149 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.701209 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c9f2fa6-041c-485c-a636-af6766444f89-run-httpd\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.701245 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rpp9k\" (UniqueName: \"kubernetes.io/projected/0c9f2fa6-041c-485c-a636-af6766444f89-kube-api-access-rpp9k\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.701283 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.703260 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c9f2fa6-041c-485c-a636-af6766444f89-run-httpd\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.703896 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c9f2fa6-041c-485c-a636-af6766444f89-log-httpd\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.705112 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.705234 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-scripts\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.706996 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.708830 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-config-data\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.711017 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9f2fa6-041c-485c-a636-af6766444f89-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.725285 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpp9k\" (UniqueName: \"kubernetes.io/projected/0c9f2fa6-041c-485c-a636-af6766444f89-kube-api-access-rpp9k\") pod \"ceilometer-0\" (UID: \"0c9f2fa6-041c-485c-a636-af6766444f89\") " pod="openstack/ceilometer-0" Nov 24 11:49:16 crc kubenswrapper[4789]: I1124 11:49:16.772210 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:49:17 crc kubenswrapper[4789]: I1124 11:49:17.320939 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.140115 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c9f2fa6-041c-485c-a636-af6766444f89","Type":"ContainerStarted","Data":"d231fa7985172ad6a032a65947eb1369a633adf72f832750aa3b7ce5ac07d6ba"} Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.140603 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c9f2fa6-041c-485c-a636-af6766444f89","Type":"ContainerStarted","Data":"a0fc99ec95632652174280b8ef0882784834c75ce3892511a7a3993fff838dd0"} Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.196202 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="968bf2c8-b168-45f2-87ef-54a0b2564ba9" path="/var/lib/kubelet/pods/968bf2c8-b168-45f2-87ef-54a0b2564ba9/volumes" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.670864 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.671278 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.735784 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.878520 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-config-data\") pod \"b865603a-95b2-43d4-8735-107d5e594b19\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.878615 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pww7x\" (UniqueName: \"kubernetes.io/projected/b865603a-95b2-43d4-8735-107d5e594b19-kube-api-access-pww7x\") pod \"b865603a-95b2-43d4-8735-107d5e594b19\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.878644 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-combined-ca-bundle\") pod \"b865603a-95b2-43d4-8735-107d5e594b19\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.878674 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b865603a-95b2-43d4-8735-107d5e594b19-logs\") pod \"b865603a-95b2-43d4-8735-107d5e594b19\" (UID: \"b865603a-95b2-43d4-8735-107d5e594b19\") " Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.879219 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b865603a-95b2-43d4-8735-107d5e594b19-logs" (OuterVolumeSpecName: "logs") pod "b865603a-95b2-43d4-8735-107d5e594b19" (UID: "b865603a-95b2-43d4-8735-107d5e594b19"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.900902 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b865603a-95b2-43d4-8735-107d5e594b19-kube-api-access-pww7x" (OuterVolumeSpecName: "kube-api-access-pww7x") pod "b865603a-95b2-43d4-8735-107d5e594b19" (UID: "b865603a-95b2-43d4-8735-107d5e594b19"). InnerVolumeSpecName "kube-api-access-pww7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.909497 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-config-data" (OuterVolumeSpecName: "config-data") pod "b865603a-95b2-43d4-8735-107d5e594b19" (UID: "b865603a-95b2-43d4-8735-107d5e594b19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.983626 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.983661 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pww7x\" (UniqueName: \"kubernetes.io/projected/b865603a-95b2-43d4-8735-107d5e594b19-kube-api-access-pww7x\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:18 crc kubenswrapper[4789]: I1124 11:49:18.983677 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b865603a-95b2-43d4-8735-107d5e594b19-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.002507 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b865603a-95b2-43d4-8735-107d5e594b19" (UID: "b865603a-95b2-43d4-8735-107d5e594b19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.085711 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b865603a-95b2-43d4-8735-107d5e594b19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.167054 4789 generic.go:334] "Generic (PLEG): container finished" podID="b865603a-95b2-43d4-8735-107d5e594b19" containerID="5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095" exitCode=0 Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.167159 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b865603a-95b2-43d4-8735-107d5e594b19","Type":"ContainerDied","Data":"5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095"} Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.167222 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b865603a-95b2-43d4-8735-107d5e594b19","Type":"ContainerDied","Data":"f4cc7c1f3f4ef2b13c8431065350bd9fbffec812ff4815b5c6a8224b094b383a"} Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.167241 4789 scope.go:117] "RemoveContainer" containerID="5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.167740 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.171442 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c9f2fa6-041c-485c-a636-af6766444f89","Type":"ContainerStarted","Data":"22af81a4b56ad45a95de9035116204ee773a9f0dab789d84dea1049b4c4f9b27"} Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.201784 4789 scope.go:117] "RemoveContainer" containerID="cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.209713 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.217449 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.221903 4789 scope.go:117] "RemoveContainer" containerID="5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095" Nov 24 11:49:19 crc kubenswrapper[4789]: E1124 11:49:19.222751 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095\": container with ID starting with 5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095 not found: ID does not exist" containerID="5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.222796 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095"} err="failed to get container status \"5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095\": rpc error: code = NotFound desc = could not find container \"5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095\": container with ID starting with 5a7da1ddb9d556b07dc1cf787e6824e1dfa1e6093d5cf9bf4a9fb941d3a5f095 not found: ID does not exist" Nov 24 11:49:19 crc 
kubenswrapper[4789]: I1124 11:49:19.222821 4789 scope.go:117] "RemoveContainer" containerID="cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c" Nov 24 11:49:19 crc kubenswrapper[4789]: E1124 11:49:19.224009 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c\": container with ID starting with cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c not found: ID does not exist" containerID="cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.224050 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c"} err="failed to get container status \"cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c\": rpc error: code = NotFound desc = could not find container \"cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c\": container with ID starting with cf6aa2f7b6bd29c88e5362cbddfebb86adcd08236e7f18163014685daf05ac4c not found: ID does not exist" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.227911 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:19 crc kubenswrapper[4789]: E1124 11:49:19.234084 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-log" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.234177 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-log" Nov 24 11:49:19 crc kubenswrapper[4789]: E1124 11:49:19.234232 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-api" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.234287 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-api" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.234509 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-api" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.234581 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="b865603a-95b2-43d4-8735-107d5e594b19" containerName="nova-api-log" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.235514 4789 util.go:30] "No sandbox for pod can be found. 
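
The NotFound errors above are a common and usually benign race: scope.go:117 logs RemoveContainer, the deletion succeeds, and a follow-up ContainerStatus query fails because the ID is already gone, which pod_container_deletor.go then surfaces as "DeleteContainer returned error". A sketch, under the same input assumptions, that separates these post-removal NotFounds from ones with no matching removal (which would merit a closer look):

```python
import re
import sys

REMOVE = re.compile(r'"RemoveContainer" containerID="(?P<id>[0-9a-f]{64})"')
# Matches both the raw ContainerStatus failure and the DeleteContainer wrapper,
# so one removal can be reported more than once; fine for a quick scan.
NOTFOUND = re.compile(r'code = NotFound .*?container \\?"(?P<id>[0-9a-f]{64})')

def main(stream):
    removed = set()
    for line in stream:
        if m := REMOVE.search(line):
            removed.add(m["id"])
        elif m := NOTFOUND.search(line):
            kind = "benign, already removed" if m["id"] in removed else "no prior RemoveContainer"
            print(f"NotFound for {m['id'][:12]}: {kind}")

if __name__ == "__main__":
    main(sys.stdin)
```
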
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.239970 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.240248 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.240670 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.248681 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.394346 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebd7d11a-8905-495a-aa5f-9ce90d981517-logs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.394387 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.394487 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp25v\" (UniqueName: \"kubernetes.io/projected/ebd7d11a-8905-495a-aa5f-9ce90d981517-kube-api-access-dp25v\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.394504 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.394584 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-config-data\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.394606 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-public-tls-certs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.404748 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.508880 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-config-data\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.508932 4789 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-public-tls-certs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.509025 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebd7d11a-8905-495a-aa5f-9ce90d981517-logs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.509049 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.509179 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp25v\" (UniqueName: \"kubernetes.io/projected/ebd7d11a-8905-495a-aa5f-9ce90d981517-kube-api-access-dp25v\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.509212 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.510141 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebd7d11a-8905-495a-aa5f-9ce90d981517-logs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.512410 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.514381 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.514942 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-config-data\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.520644 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-public-tls-certs\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.541900 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp25v\" 
(UniqueName: \"kubernetes.io/projected/ebd7d11a-8905-495a-aa5f-9ce90d981517-kube-api-access-dp25v\") pod \"nova-api-0\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.590251 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.693626 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.693741 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:49:19 crc kubenswrapper[4789]: I1124 11:49:19.714836 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.183102 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b865603a-95b2-43d4-8735-107d5e594b19" path="/var/lib/kubelet/pods/b865603a-95b2-43d4-8735-107d5e594b19/volumes" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.190992 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c9f2fa6-041c-485c-a636-af6766444f89","Type":"ContainerStarted","Data":"794b65e8a95a90dfa1e940a084295df5457baf8f5b8560e622db52d993e4bfab"} Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.221867 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.251706 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.528201 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pqz4s"] Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.529318 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.539682 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.540037 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.564045 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pqz4s"] Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.640262 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-scripts\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.640674 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-config-data\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.641942 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxprc\" (UniqueName: \"kubernetes.io/projected/419ba329-785c-4647-b1c9-cb366aaaea48-kube-api-access-wxprc\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.641989 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.744190 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-scripts\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.744238 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-config-data\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.744310 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxprc\" (UniqueName: \"kubernetes.io/projected/419ba329-785c-4647-b1c9-cb366aaaea48-kube-api-access-wxprc\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.744335 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.752276 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-config-data\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.752452 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.755294 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-scripts\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.767323 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxprc\" (UniqueName: \"kubernetes.io/projected/419ba329-785c-4647-b1c9-cb366aaaea48-kube-api-access-wxprc\") pod \"nova-cell1-cell-mapping-pqz4s\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:20 crc kubenswrapper[4789]: I1124 11:49:20.897093 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:21 crc kubenswrapper[4789]: I1124 11:49:21.222125 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebd7d11a-8905-495a-aa5f-9ce90d981517","Type":"ContainerStarted","Data":"4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb"} Nov 24 11:49:21 crc kubenswrapper[4789]: I1124 11:49:21.222435 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebd7d11a-8905-495a-aa5f-9ce90d981517","Type":"ContainerStarted","Data":"6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d"} Nov 24 11:49:21 crc kubenswrapper[4789]: I1124 11:49:21.222447 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebd7d11a-8905-495a-aa5f-9ce90d981517","Type":"ContainerStarted","Data":"67792e6a0e598859e2dbab6994ed3f4494e19dfe405021e392f7087de2698d95"} Nov 24 11:49:21 crc kubenswrapper[4789]: I1124 11:49:21.229655 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c9f2fa6-041c-485c-a636-af6766444f89","Type":"ContainerStarted","Data":"187614053ac71be10c14de7cb4c0f8b3295db7297b2542454d9c214d33723015"} Nov 24 11:49:21 crc kubenswrapper[4789]: I1124 11:49:21.229689 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:49:21 crc kubenswrapper[4789]: I1124 11:49:21.254668 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.254645135 podStartE2EDuration="2.254645135s" podCreationTimestamp="2025-11-24 11:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:49:21.245623898 +0000 UTC m=+1143.828095297" watchObservedRunningTime="2025-11-24 11:49:21.254645135 +0000 UTC m=+1143.837116504" Nov 24 11:49:21 crc kubenswrapper[4789]: I1124 11:49:21.278224 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.008762387 podStartE2EDuration="5.278202635s" podCreationTimestamp="2025-11-24 11:49:16 +0000 UTC" firstStartedPulling="2025-11-24 11:49:17.329250472 +0000 UTC m=+1139.911721861" lastFinishedPulling="2025-11-24 11:49:20.59869073 +0000 UTC m=+1143.181162109" observedRunningTime="2025-11-24 11:49:21.266067212 +0000 UTC m=+1143.848538591" watchObservedRunningTime="2025-11-24 11:49:21.278202635 +0000 UTC m=+1143.860674004" Nov 24 11:49:21 crc kubenswrapper[4789]: I1124 11:49:21.459332 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pqz4s"] Nov 24 11:49:21 crc kubenswrapper[4789]: W1124 11:49:21.460014 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod419ba329_785c_4647_b1c9_cb366aaaea48.slice/crio-5817cddd41c3fa6aab0c179496835721160928e822842758506b50ec626c1079 WatchSource:0}: Error finding container 5817cddd41c3fa6aab0c179496835721160928e822842758506b50ec626c1079: Status 404 returned error can't find the container with id 5817cddd41c3fa6aab0c179496835721160928e822842758506b50ec626c1079 Nov 24 11:49:22 crc kubenswrapper[4789]: I1124 11:49:22.238790 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pqz4s" 
event={"ID":"419ba329-785c-4647-b1c9-cb366aaaea48","Type":"ContainerStarted","Data":"e5a00590bf0e7a113b98e8e5ff242d4ed17147f3562cfb82c01ba559ae88fd96"} Nov 24 11:49:22 crc kubenswrapper[4789]: I1124 11:49:22.239247 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pqz4s" event={"ID":"419ba329-785c-4647-b1c9-cb366aaaea48","Type":"ContainerStarted","Data":"5817cddd41c3fa6aab0c179496835721160928e822842758506b50ec626c1079"} Nov 24 11:49:22 crc kubenswrapper[4789]: I1124 11:49:22.266524 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pqz4s" podStartSLOduration=2.266504774 podStartE2EDuration="2.266504774s" podCreationTimestamp="2025-11-24 11:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:49:22.258678385 +0000 UTC m=+1144.841149774" watchObservedRunningTime="2025-11-24 11:49:22.266504774 +0000 UTC m=+1144.848976163" Nov 24 11:49:22 crc kubenswrapper[4789]: I1124 11:49:22.583745 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:49:22 crc kubenswrapper[4789]: I1124 11:49:22.688992 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-4dgpk"] Nov 24 11:49:22 crc kubenswrapper[4789]: I1124 11:49:22.689234 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" podUID="234d181f-edd2-40e2-9c4f-683c28176a4a" containerName="dnsmasq-dns" containerID="cri-o://007f3a8dce0bd7dfc3a683dfbc04b21b28fc2dade6a75ed6b12401eaa382ce0e" gracePeriod=10 Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.269914 4789 generic.go:334] "Generic (PLEG): container finished" podID="234d181f-edd2-40e2-9c4f-683c28176a4a" containerID="007f3a8dce0bd7dfc3a683dfbc04b21b28fc2dade6a75ed6b12401eaa382ce0e" exitCode=0 Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.270690 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" event={"ID":"234d181f-edd2-40e2-9c4f-683c28176a4a","Type":"ContainerDied","Data":"007f3a8dce0bd7dfc3a683dfbc04b21b28fc2dade6a75ed6b12401eaa382ce0e"} Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.382093 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.512832 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-nb\") pod \"234d181f-edd2-40e2-9c4f-683c28176a4a\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.512901 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-dns-svc\") pod \"234d181f-edd2-40e2-9c4f-683c28176a4a\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.512972 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-config\") pod \"234d181f-edd2-40e2-9c4f-683c28176a4a\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.513038 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtvj6\" (UniqueName: \"kubernetes.io/projected/234d181f-edd2-40e2-9c4f-683c28176a4a-kube-api-access-vtvj6\") pod \"234d181f-edd2-40e2-9c4f-683c28176a4a\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.513070 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-sb\") pod \"234d181f-edd2-40e2-9c4f-683c28176a4a\" (UID: \"234d181f-edd2-40e2-9c4f-683c28176a4a\") " Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.541653 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234d181f-edd2-40e2-9c4f-683c28176a4a-kube-api-access-vtvj6" (OuterVolumeSpecName: "kube-api-access-vtvj6") pod "234d181f-edd2-40e2-9c4f-683c28176a4a" (UID: "234d181f-edd2-40e2-9c4f-683c28176a4a"). InnerVolumeSpecName "kube-api-access-vtvj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.581835 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "234d181f-edd2-40e2-9c4f-683c28176a4a" (UID: "234d181f-edd2-40e2-9c4f-683c28176a4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.584979 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-config" (OuterVolumeSpecName: "config") pod "234d181f-edd2-40e2-9c4f-683c28176a4a" (UID: "234d181f-edd2-40e2-9c4f-683c28176a4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.589067 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "234d181f-edd2-40e2-9c4f-683c28176a4a" (UID: "234d181f-edd2-40e2-9c4f-683c28176a4a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.590608 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "234d181f-edd2-40e2-9c4f-683c28176a4a" (UID: "234d181f-edd2-40e2-9c4f-683c28176a4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.614630 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.614665 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.614674 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.614683 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtvj6\" (UniqueName: \"kubernetes.io/projected/234d181f-edd2-40e2-9c4f-683c28176a4a-kube-api-access-vtvj6\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:23 crc kubenswrapper[4789]: I1124 11:49:23.614693 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/234d181f-edd2-40e2-9c4f-683c28176a4a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:24 crc kubenswrapper[4789]: I1124 11:49:24.281651 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" event={"ID":"234d181f-edd2-40e2-9c4f-683c28176a4a","Type":"ContainerDied","Data":"155a2fcad1bc50f9667e21441db8285fb31354a68b3f4d92bc0eeb55b179f010"} Nov 24 11:49:24 crc kubenswrapper[4789]: I1124 11:49:24.281704 4789 scope.go:117] "RemoveContainer" containerID="007f3a8dce0bd7dfc3a683dfbc04b21b28fc2dade6a75ed6b12401eaa382ce0e" Nov 24 11:49:24 crc kubenswrapper[4789]: I1124 11:49:24.281990 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-4dgpk" Nov 24 11:49:24 crc kubenswrapper[4789]: I1124 11:49:24.302157 4789 scope.go:117] "RemoveContainer" containerID="c5679323096f9ad30087ec4c4bae3cc84ec652c8f3b91f8c606c91d2ee81e7dd" Nov 24 11:49:24 crc kubenswrapper[4789]: I1124 11:49:24.314515 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-4dgpk"] Nov 24 11:49:24 crc kubenswrapper[4789]: I1124 11:49:24.320528 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-4dgpk"] Nov 24 11:49:26 crc kubenswrapper[4789]: I1124 11:49:26.179705 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234d181f-edd2-40e2-9c4f-683c28176a4a" path="/var/lib/kubelet/pods/234d181f-edd2-40e2-9c4f-683c28176a4a/volumes" Nov 24 11:49:27 crc kubenswrapper[4789]: I1124 11:49:27.308835 4789 generic.go:334] "Generic (PLEG): container finished" podID="419ba329-785c-4647-b1c9-cb366aaaea48" containerID="e5a00590bf0e7a113b98e8e5ff242d4ed17147f3562cfb82c01ba559ae88fd96" exitCode=0 Nov 24 11:49:27 crc kubenswrapper[4789]: I1124 11:49:27.308947 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pqz4s" event={"ID":"419ba329-785c-4647-b1c9-cb366aaaea48","Type":"ContainerDied","Data":"e5a00590bf0e7a113b98e8e5ff242d4ed17147f3562cfb82c01ba559ae88fd96"} Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.679340 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.680079 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.688816 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.693512 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.816103 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-combined-ca-bundle\") pod \"419ba329-785c-4647-b1c9-cb366aaaea48\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.816167 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxprc\" (UniqueName: \"kubernetes.io/projected/419ba329-785c-4647-b1c9-cb366aaaea48-kube-api-access-wxprc\") pod \"419ba329-785c-4647-b1c9-cb366aaaea48\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.816248 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-config-data\") pod \"419ba329-785c-4647-b1c9-cb366aaaea48\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.816331 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-scripts\") pod \"419ba329-785c-4647-b1c9-cb366aaaea48\" (UID: \"419ba329-785c-4647-b1c9-cb366aaaea48\") " Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.826263 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419ba329-785c-4647-b1c9-cb366aaaea48-kube-api-access-wxprc" (OuterVolumeSpecName: "kube-api-access-wxprc") pod "419ba329-785c-4647-b1c9-cb366aaaea48" (UID: "419ba329-785c-4647-b1c9-cb366aaaea48"). InnerVolumeSpecName "kube-api-access-wxprc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.835131 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-scripts" (OuterVolumeSpecName: "scripts") pod "419ba329-785c-4647-b1c9-cb366aaaea48" (UID: "419ba329-785c-4647-b1c9-cb366aaaea48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.846385 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-config-data" (OuterVolumeSpecName: "config-data") pod "419ba329-785c-4647-b1c9-cb366aaaea48" (UID: "419ba329-785c-4647-b1c9-cb366aaaea48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.849709 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "419ba329-785c-4647-b1c9-cb366aaaea48" (UID: "419ba329-785c-4647-b1c9-cb366aaaea48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.918759 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.918786 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxprc\" (UniqueName: \"kubernetes.io/projected/419ba329-785c-4647-b1c9-cb366aaaea48-kube-api-access-wxprc\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.918797 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:28 crc kubenswrapper[4789]: I1124 11:49:28.918804 4789 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419ba329-785c-4647-b1c9-cb366aaaea48-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.329983 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pqz4s" Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.329989 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pqz4s" event={"ID":"419ba329-785c-4647-b1c9-cb366aaaea48","Type":"ContainerDied","Data":"5817cddd41c3fa6aab0c179496835721160928e822842758506b50ec626c1079"} Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.330072 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5817cddd41c3fa6aab0c179496835721160928e822842758506b50ec626c1079" Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.339079 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.592045 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.592493 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.619989 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.620231 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9be05943-90fe-4fef-9251-3b8cce1b1d70" containerName="nova-scheduler-scheduler" containerID="cri-o://9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a" gracePeriod=30 Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.634223 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:29 crc kubenswrapper[4789]: I1124 11:49:29.714177 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:30 crc kubenswrapper[4789]: E1124 11:49:30.118819 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 
11:49:30 crc kubenswrapper[4789]: E1124 11:49:30.120249 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:49:30 crc kubenswrapper[4789]: E1124 11:49:30.121306 4789 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:49:30 crc kubenswrapper[4789]: E1124 11:49:30.121341 4789 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9be05943-90fe-4fef-9251-3b8cce1b1d70" containerName="nova-scheduler-scheduler" Nov 24 11:49:30 crc kubenswrapper[4789]: I1124 11:49:30.605785 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:49:30 crc kubenswrapper[4789]: I1124 11:49:30.605823 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:49:31 crc kubenswrapper[4789]: I1124 11:49:31.343431 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-log" containerID="cri-o://322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b" gracePeriod=30 Nov 24 11:49:31 crc kubenswrapper[4789]: I1124 11:49:31.343517 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-metadata" containerID="cri-o://fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228" gracePeriod=30 Nov 24 11:49:31 crc kubenswrapper[4789]: I1124 11:49:31.343556 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-log" containerID="cri-o://6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d" gracePeriod=30 Nov 24 11:49:31 crc kubenswrapper[4789]: I1124 11:49:31.343654 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-api" containerID="cri-o://4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb" gracePeriod=30 Nov 24 11:49:32 crc kubenswrapper[4789]: I1124 11:49:32.361099 4789 generic.go:334] "Generic (PLEG): container finished" podID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerID="322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b" exitCode=143 Nov 24 11:49:32 crc 
kubenswrapper[4789]: I1124 11:49:32.361211 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c375501-c3aa-4a6e-b0bc-9991f2d56b37","Type":"ContainerDied","Data":"322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b"} Nov 24 11:49:32 crc kubenswrapper[4789]: I1124 11:49:32.367239 4789 generic.go:334] "Generic (PLEG): container finished" podID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerID="6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d" exitCode=143 Nov 24 11:49:32 crc kubenswrapper[4789]: I1124 11:49:32.367282 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebd7d11a-8905-495a-aa5f-9ce90d981517","Type":"ContainerDied","Data":"6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d"} Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.296934 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.385339 4789 generic.go:334] "Generic (PLEG): container finished" podID="9be05943-90fe-4fef-9251-3b8cce1b1d70" containerID="9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a" exitCode=0 Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.385385 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9be05943-90fe-4fef-9251-3b8cce1b1d70","Type":"ContainerDied","Data":"9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a"} Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.385405 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.385422 4789 scope.go:117] "RemoveContainer" containerID="9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.385410 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9be05943-90fe-4fef-9251-3b8cce1b1d70","Type":"ContainerDied","Data":"a60550446a4a1769c95d23bdef74b32d52837024a81956ec73fc9afdb9863294"} Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.395170 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-config-data\") pod \"9be05943-90fe-4fef-9251-3b8cce1b1d70\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.395241 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-combined-ca-bundle\") pod \"9be05943-90fe-4fef-9251-3b8cce1b1d70\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.395410 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhgvq\" (UniqueName: \"kubernetes.io/projected/9be05943-90fe-4fef-9251-3b8cce1b1d70-kube-api-access-mhgvq\") pod \"9be05943-90fe-4fef-9251-3b8cce1b1d70\" (UID: \"9be05943-90fe-4fef-9251-3b8cce1b1d70\") " Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.418811 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be05943-90fe-4fef-9251-3b8cce1b1d70-kube-api-access-mhgvq" (OuterVolumeSpecName: 
"kube-api-access-mhgvq") pod "9be05943-90fe-4fef-9251-3b8cce1b1d70" (UID: "9be05943-90fe-4fef-9251-3b8cce1b1d70"). InnerVolumeSpecName "kube-api-access-mhgvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.419008 4789 scope.go:117] "RemoveContainer" containerID="9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a" Nov 24 11:49:34 crc kubenswrapper[4789]: E1124 11:49:34.419695 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a\": container with ID starting with 9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a not found: ID does not exist" containerID="9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.419739 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a"} err="failed to get container status \"9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a\": rpc error: code = NotFound desc = could not find container \"9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a\": container with ID starting with 9dc519d6c3db0f1cffd78871d47fdc601c22f70e5db7d7070b8e75ee755e5e4a not found: ID does not exist" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.424225 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-config-data" (OuterVolumeSpecName: "config-data") pod "9be05943-90fe-4fef-9251-3b8cce1b1d70" (UID: "9be05943-90fe-4fef-9251-3b8cce1b1d70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.425745 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9be05943-90fe-4fef-9251-3b8cce1b1d70" (UID: "9be05943-90fe-4fef-9251-3b8cce1b1d70"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.497156 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.497184 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be05943-90fe-4fef-9251-3b8cce1b1d70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.497195 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhgvq\" (UniqueName: \"kubernetes.io/projected/9be05943-90fe-4fef-9251-3b8cce1b1d70-kube-api-access-mhgvq\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.499128 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": read tcp 10.217.0.2:34776->10.217.0.180:8775: read: connection reset by peer" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.499682 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": read tcp 10.217.0.2:34760->10.217.0.180:8775: read: connection reset by peer" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.727600 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.735957 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.765934 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:49:34 crc kubenswrapper[4789]: E1124 11:49:34.774494 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be05943-90fe-4fef-9251-3b8cce1b1d70" containerName="nova-scheduler-scheduler" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.774539 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be05943-90fe-4fef-9251-3b8cce1b1d70" containerName="nova-scheduler-scheduler" Nov 24 11:49:34 crc kubenswrapper[4789]: E1124 11:49:34.774586 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234d181f-edd2-40e2-9c4f-683c28176a4a" containerName="init" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.774594 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="234d181f-edd2-40e2-9c4f-683c28176a4a" containerName="init" Nov 24 11:49:34 crc kubenswrapper[4789]: E1124 11:49:34.774613 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419ba329-785c-4647-b1c9-cb366aaaea48" containerName="nova-manage" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.774622 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="419ba329-785c-4647-b1c9-cb366aaaea48" containerName="nova-manage" Nov 24 11:49:34 crc kubenswrapper[4789]: E1124 11:49:34.774640 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234d181f-edd2-40e2-9c4f-683c28176a4a" containerName="dnsmasq-dns" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.774649 4789 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="234d181f-edd2-40e2-9c4f-683c28176a4a" containerName="dnsmasq-dns" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.775206 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="419ba329-785c-4647-b1c9-cb366aaaea48" containerName="nova-manage" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.775241 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="234d181f-edd2-40e2-9c4f-683c28176a4a" containerName="dnsmasq-dns" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.775258 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be05943-90fe-4fef-9251-3b8cce1b1d70" containerName="nova-scheduler-scheduler" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.776199 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.830636 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.879679 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.940394 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f04a406-8a85-4850-9611-311d3229b127-config-data\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.940508 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjrss\" (UniqueName: \"kubernetes.io/projected/7f04a406-8a85-4850-9611-311d3229b127-kube-api-access-qjrss\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:34 crc kubenswrapper[4789]: I1124 11:49:34.940591 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f04a406-8a85-4850-9611-311d3229b127-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.042469 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f04a406-8a85-4850-9611-311d3229b127-config-data\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.042520 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjrss\" (UniqueName: \"kubernetes.io/projected/7f04a406-8a85-4850-9611-311d3229b127-kube-api-access-qjrss\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.042602 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f04a406-8a85-4850-9611-311d3229b127-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.052280 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f04a406-8a85-4850-9611-311d3229b127-config-data\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.055955 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f04a406-8a85-4850-9611-311d3229b127-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.057913 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjrss\" (UniqueName: \"kubernetes.io/projected/7f04a406-8a85-4850-9611-311d3229b127-kube-api-access-qjrss\") pod \"nova-scheduler-0\" (UID: \"7f04a406-8a85-4850-9611-311d3229b127\") " pod="openstack/nova-scheduler-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.131423 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.191112 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.244909 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-logs\") pod \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.245014 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twlvq\" (UniqueName: \"kubernetes.io/projected/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-kube-api-access-twlvq\") pod \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.245062 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-config-data\") pod \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.245097 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-nova-metadata-tls-certs\") pod \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.245121 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-combined-ca-bundle\") pod \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\" (UID: \"9c375501-c3aa-4a6e-b0bc-9991f2d56b37\") " Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.246555 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-logs" (OuterVolumeSpecName: "logs") pod "9c375501-c3aa-4a6e-b0bc-9991f2d56b37" (UID: "9c375501-c3aa-4a6e-b0bc-9991f2d56b37"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.275005 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-kube-api-access-twlvq" (OuterVolumeSpecName: "kube-api-access-twlvq") pod "9c375501-c3aa-4a6e-b0bc-9991f2d56b37" (UID: "9c375501-c3aa-4a6e-b0bc-9991f2d56b37"). InnerVolumeSpecName "kube-api-access-twlvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.286624 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c375501-c3aa-4a6e-b0bc-9991f2d56b37" (UID: "9c375501-c3aa-4a6e-b0bc-9991f2d56b37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.314901 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-config-data" (OuterVolumeSpecName: "config-data") pod "9c375501-c3aa-4a6e-b0bc-9991f2d56b37" (UID: "9c375501-c3aa-4a6e-b0bc-9991f2d56b37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.324599 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9c375501-c3aa-4a6e-b0bc-9991f2d56b37" (UID: "9c375501-c3aa-4a6e-b0bc-9991f2d56b37"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.347955 4789 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.348157 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.348213 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.348265 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twlvq\" (UniqueName: \"kubernetes.io/projected/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-kube-api-access-twlvq\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.348430 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c375501-c3aa-4a6e-b0bc-9991f2d56b37-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.399595 4789 generic.go:334] "Generic (PLEG): container finished" podID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerID="fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228" exitCode=0 Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.399625 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c375501-c3aa-4a6e-b0bc-9991f2d56b37","Type":"ContainerDied","Data":"fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228"} Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.399646 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c375501-c3aa-4a6e-b0bc-9991f2d56b37","Type":"ContainerDied","Data":"0106898767ce904c5278c36e8551ddbc9cd854b9815bc6c8c7cb0135a4bc649f"} Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.399662 4789 scope.go:117] "RemoveContainer" containerID="fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.399769 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.471027 4789 scope.go:117] "RemoveContainer" containerID="322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.471539 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.490269 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.499756 4789 scope.go:117] "RemoveContainer" containerID="fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228" Nov 24 11:49:35 crc kubenswrapper[4789]: E1124 11:49:35.501244 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228\": container with ID starting with fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228 not found: ID does not exist" containerID="fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.501284 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228"} err="failed to get container status \"fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228\": rpc error: code = NotFound desc = could not find container \"fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228\": container with ID starting with fa3f7d71da62548169a95a9ce2014b7881d8182caa7a3a6f0adc683bd1ffd228 not found: ID does not exist" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.501310 4789 scope.go:117] "RemoveContainer" containerID="322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b" Nov 24 11:49:35 crc kubenswrapper[4789]: E1124 11:49:35.501658 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b\": container with ID starting with 322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b not found: ID does not exist" containerID="322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.501690 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b"} err="failed to get container status \"322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b\": rpc error: code = NotFound desc = could not find container \"322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b\": container with ID starting with 322f2fd4eb319c83d67e3ef438925e87b3ce9d1182a18eda865e4dc0e44b474b not found: ID does not exist" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.505784 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:35 crc kubenswrapper[4789]: E1124 11:49:35.506439 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-metadata" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.506607 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" 
containerName="nova-metadata-metadata" Nov 24 11:49:35 crc kubenswrapper[4789]: E1124 11:49:35.506745 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-log" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.506760 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-log" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.507145 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-log" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.507168 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" containerName="nova-metadata-metadata" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.508585 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.510936 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.518595 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.521485 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.655334 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-config-data\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.655438 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxmp\" (UniqueName: \"kubernetes.io/projected/0ca2367e-056b-4136-98ec-d53805416c09-kube-api-access-4wxmp\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.655492 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ca2367e-056b-4136-98ec-d53805416c09-logs\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.655521 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.655556 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.715573 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Nov 24 11:49:35 crc kubenswrapper[4789]: W1124 11:49:35.719498 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f04a406_8a85_4850_9611_311d3229b127.slice/crio-27adec217a158386a59b536975d36c4cc99196ca602285e7e72c04a04fb934bf WatchSource:0}: Error finding container 27adec217a158386a59b536975d36c4cc99196ca602285e7e72c04a04fb934bf: Status 404 returned error can't find the container with id 27adec217a158386a59b536975d36c4cc99196ca602285e7e72c04a04fb934bf Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.756776 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.757066 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.757095 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-config-data\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.757205 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxmp\" (UniqueName: \"kubernetes.io/projected/0ca2367e-056b-4136-98ec-d53805416c09-kube-api-access-4wxmp\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.757256 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ca2367e-056b-4136-98ec-d53805416c09-logs\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.757650 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ca2367e-056b-4136-98ec-d53805416c09-logs\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.761004 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.761177 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.761586 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca2367e-056b-4136-98ec-d53805416c09-config-data\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.784846 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxmp\" (UniqueName: \"kubernetes.io/projected/0ca2367e-056b-4136-98ec-d53805416c09-kube-api-access-4wxmp\") pod \"nova-metadata-0\" (UID: \"0ca2367e-056b-4136-98ec-d53805416c09\") " pod="openstack/nova-metadata-0" Nov 24 11:49:35 crc kubenswrapper[4789]: I1124 11:49:35.857784 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.182256 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be05943-90fe-4fef-9251-3b8cce1b1d70" path="/var/lib/kubelet/pods/9be05943-90fe-4fef-9251-3b8cce1b1d70/volumes" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.183447 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c375501-c3aa-4a6e-b0bc-9991f2d56b37" path="/var/lib/kubelet/pods/9c375501-c3aa-4a6e-b0bc-9991f2d56b37/volumes" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.251992 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.383237 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-internal-tls-certs\") pod \"ebd7d11a-8905-495a-aa5f-9ce90d981517\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.383367 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-combined-ca-bundle\") pod \"ebd7d11a-8905-495a-aa5f-9ce90d981517\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.383486 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-config-data\") pod \"ebd7d11a-8905-495a-aa5f-9ce90d981517\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.383538 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-public-tls-certs\") pod \"ebd7d11a-8905-495a-aa5f-9ce90d981517\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.383575 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebd7d11a-8905-495a-aa5f-9ce90d981517-logs\") pod \"ebd7d11a-8905-495a-aa5f-9ce90d981517\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.387022 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp25v\" (UniqueName: \"kubernetes.io/projected/ebd7d11a-8905-495a-aa5f-9ce90d981517-kube-api-access-dp25v\") pod 
\"ebd7d11a-8905-495a-aa5f-9ce90d981517\" (UID: \"ebd7d11a-8905-495a-aa5f-9ce90d981517\") " Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.390946 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebd7d11a-8905-495a-aa5f-9ce90d981517-logs" (OuterVolumeSpecName: "logs") pod "ebd7d11a-8905-495a-aa5f-9ce90d981517" (UID: "ebd7d11a-8905-495a-aa5f-9ce90d981517"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.404751 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:49:36 crc kubenswrapper[4789]: W1124 11:49:36.408629 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ca2367e_056b_4136_98ec_d53805416c09.slice/crio-03998e9910f40853d80f3197c104d68c4d133d0adf6d96e409c8f3daa3b66f4c WatchSource:0}: Error finding container 03998e9910f40853d80f3197c104d68c4d133d0adf6d96e409c8f3daa3b66f4c: Status 404 returned error can't find the container with id 03998e9910f40853d80f3197c104d68c4d133d0adf6d96e409c8f3daa3b66f4c Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.409748 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd7d11a-8905-495a-aa5f-9ce90d981517-kube-api-access-dp25v" (OuterVolumeSpecName: "kube-api-access-dp25v") pod "ebd7d11a-8905-495a-aa5f-9ce90d981517" (UID: "ebd7d11a-8905-495a-aa5f-9ce90d981517"). InnerVolumeSpecName "kube-api-access-dp25v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.414209 4789 generic.go:334] "Generic (PLEG): container finished" podID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerID="4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb" exitCode=0 Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.414312 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.414379 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebd7d11a-8905-495a-aa5f-9ce90d981517","Type":"ContainerDied","Data":"4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb"} Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.414436 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebd7d11a-8905-495a-aa5f-9ce90d981517","Type":"ContainerDied","Data":"67792e6a0e598859e2dbab6994ed3f4494e19dfe405021e392f7087de2698d95"} Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.414472 4789 scope.go:117] "RemoveContainer" containerID="4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.420857 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7f04a406-8a85-4850-9611-311d3229b127","Type":"ContainerStarted","Data":"f5e58547d39b1afbab05946fc48cef8e1c52aa6f8a65c3ad5d7ce9abbf80bd2f"} Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.420896 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7f04a406-8a85-4850-9611-311d3229b127","Type":"ContainerStarted","Data":"27adec217a158386a59b536975d36c4cc99196ca602285e7e72c04a04fb934bf"} Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.430946 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-config-data" (OuterVolumeSpecName: "config-data") pod "ebd7d11a-8905-495a-aa5f-9ce90d981517" (UID: "ebd7d11a-8905-495a-aa5f-9ce90d981517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.444306 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.444289758 podStartE2EDuration="2.444289758s" podCreationTimestamp="2025-11-24 11:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:49:36.43691786 +0000 UTC m=+1159.019389249" watchObservedRunningTime="2025-11-24 11:49:36.444289758 +0000 UTC m=+1159.026761147" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.476322 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebd7d11a-8905-495a-aa5f-9ce90d981517" (UID: "ebd7d11a-8905-495a-aa5f-9ce90d981517"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.478789 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ebd7d11a-8905-495a-aa5f-9ce90d981517" (UID: "ebd7d11a-8905-495a-aa5f-9ce90d981517"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.487220 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ebd7d11a-8905-495a-aa5f-9ce90d981517" (UID: "ebd7d11a-8905-495a-aa5f-9ce90d981517"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.489444 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.489496 4789 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.489511 4789 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebd7d11a-8905-495a-aa5f-9ce90d981517-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.489542 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp25v\" (UniqueName: \"kubernetes.io/projected/ebd7d11a-8905-495a-aa5f-9ce90d981517-kube-api-access-dp25v\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.489554 4789 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.489565 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd7d11a-8905-495a-aa5f-9ce90d981517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.508788 4789 scope.go:117] "RemoveContainer" containerID="6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.552728 4789 scope.go:117] "RemoveContainer" containerID="4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb" Nov 24 11:49:36 crc kubenswrapper[4789]: E1124 11:49:36.553155 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb\": container with ID starting with 4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb not found: ID does not exist" containerID="4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.553247 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb"} err="failed to get container status \"4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb\": rpc error: code = NotFound desc = could not find container \"4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb\": container with ID starting with 4c123b41123db5dd34be50ec4b9d6d28699cc4ae9d87f8de1a875575af3885bb not found: ID does not exist" Nov 24 11:49:36 crc 
kubenswrapper[4789]: I1124 11:49:36.553276 4789 scope.go:117] "RemoveContainer" containerID="6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d" Nov 24 11:49:36 crc kubenswrapper[4789]: E1124 11:49:36.553639 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d\": container with ID starting with 6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d not found: ID does not exist" containerID="6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.553705 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d"} err="failed to get container status \"6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d\": rpc error: code = NotFound desc = could not find container \"6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d\": container with ID starting with 6d7f77ffdff2490d82eeeb3437bffa52238c166bbabf570ea7032585554d716d not found: ID does not exist" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.758589 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.766257 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.784363 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:36 crc kubenswrapper[4789]: E1124 11:49:36.785034 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-log" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.785052 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-log" Nov 24 11:49:36 crc kubenswrapper[4789]: E1124 11:49:36.785092 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-api" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.785098 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-api" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.785271 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-log" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.785284 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" containerName="nova-api-api" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.786216 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.789225 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.789440 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.789568 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.856522 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.897106 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.897168 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a9a39a-cd0e-49d0-a161-065526d89b49-logs\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.897203 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-config-data\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.897246 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.897266 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpphg\" (UniqueName: \"kubernetes.io/projected/c2a9a39a-cd0e-49d0-a161-065526d89b49-kube-api-access-bpphg\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.897287 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-public-tls-certs\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.998943 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.999019 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a9a39a-cd0e-49d0-a161-065526d89b49-logs\") pod \"nova-api-0\" (UID: 
\"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.999054 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-config-data\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.999099 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.999122 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpphg\" (UniqueName: \"kubernetes.io/projected/c2a9a39a-cd0e-49d0-a161-065526d89b49-kube-api-access-bpphg\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:36 crc kubenswrapper[4789]: I1124 11:49:36.999143 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-public-tls-certs\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.000931 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a9a39a-cd0e-49d0-a161-065526d89b49-logs\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.003036 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-public-tls-certs\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.006050 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-config-data\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.010031 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.010216 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a9a39a-cd0e-49d0-a161-065526d89b49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " pod="openstack/nova-api-0" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.019430 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpphg\" (UniqueName: \"kubernetes.io/projected/c2a9a39a-cd0e-49d0-a161-065526d89b49-kube-api-access-bpphg\") pod \"nova-api-0\" (UID: \"c2a9a39a-cd0e-49d0-a161-065526d89b49\") " 
pod="openstack/nova-api-0" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.118373 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.430780 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ca2367e-056b-4136-98ec-d53805416c09","Type":"ContainerStarted","Data":"5fa26d01bf07848594ed5c094d31f6c0e6b2859052809844f2e7918795b0cca4"} Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.431153 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ca2367e-056b-4136-98ec-d53805416c09","Type":"ContainerStarted","Data":"c625e803020c7d14394bbf65ab1a234e03258daa694bafbd2059ed3f16005e09"} Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.431169 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ca2367e-056b-4136-98ec-d53805416c09","Type":"ContainerStarted","Data":"03998e9910f40853d80f3197c104d68c4d133d0adf6d96e409c8f3daa3b66f4c"} Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.462495 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.462473859 podStartE2EDuration="2.462473859s" podCreationTimestamp="2025-11-24 11:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:49:37.460011549 +0000 UTC m=+1160.042482918" watchObservedRunningTime="2025-11-24 11:49:37.462473859 +0000 UTC m=+1160.044945228" Nov 24 11:49:37 crc kubenswrapper[4789]: I1124 11:49:37.556153 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:49:38 crc kubenswrapper[4789]: I1124 11:49:38.190434 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd7d11a-8905-495a-aa5f-9ce90d981517" path="/var/lib/kubelet/pods/ebd7d11a-8905-495a-aa5f-9ce90d981517/volumes" Nov 24 11:49:38 crc kubenswrapper[4789]: I1124 11:49:38.442391 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2a9a39a-cd0e-49d0-a161-065526d89b49","Type":"ContainerStarted","Data":"44b90ea29aa27548b1d0d53b7811b00361ddd4ac4521209ee8d034e0c49c1f01"} Nov 24 11:49:38 crc kubenswrapper[4789]: I1124 11:49:38.442479 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2a9a39a-cd0e-49d0-a161-065526d89b49","Type":"ContainerStarted","Data":"45a318e65b5374f4cdae2b07d5a5067ce4a641e5bfeee948e49a164b833e2444"} Nov 24 11:49:38 crc kubenswrapper[4789]: I1124 11:49:38.442495 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2a9a39a-cd0e-49d0-a161-065526d89b49","Type":"ContainerStarted","Data":"f6f171d3c5f4af44890b2e1deed80585d913dbefd57fef89021a469db8578ff0"} Nov 24 11:49:38 crc kubenswrapper[4789]: I1124 11:49:38.481557 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.48144708 podStartE2EDuration="2.48144708s" podCreationTimestamp="2025-11-24 11:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:49:38.477068234 +0000 UTC m=+1161.059539633" watchObservedRunningTime="2025-11-24 11:49:38.48144708 +0000 UTC m=+1161.063918459" Nov 24 11:49:40 crc kubenswrapper[4789]: I1124 11:49:40.193053 
4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 11:49:40 crc kubenswrapper[4789]: I1124 11:49:40.858845 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:49:40 crc kubenswrapper[4789]: I1124 11:49:40.858910 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:49:45 crc kubenswrapper[4789]: I1124 11:49:45.191780 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 11:49:45 crc kubenswrapper[4789]: I1124 11:49:45.222047 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 11:49:45 crc kubenswrapper[4789]: I1124 11:49:45.548314 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 11:49:45 crc kubenswrapper[4789]: I1124 11:49:45.858580 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:49:45 crc kubenswrapper[4789]: I1124 11:49:45.858625 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:49:46 crc kubenswrapper[4789]: I1124 11:49:46.788810 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 11:49:46 crc kubenswrapper[4789]: I1124 11:49:46.900319 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0ca2367e-056b-4136-98ec-d53805416c09" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:49:46 crc kubenswrapper[4789]: I1124 11:49:46.900376 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0ca2367e-056b-4136-98ec-d53805416c09" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:49:47 crc kubenswrapper[4789]: I1124 11:49:47.120252 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:49:47 crc kubenswrapper[4789]: I1124 11:49:47.120302 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:49:48 crc kubenswrapper[4789]: I1124 11:49:48.136796 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c2a9a39a-cd0e-49d0-a161-065526d89b49" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:49:48 crc kubenswrapper[4789]: I1124 11:49:48.137525 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c2a9a39a-cd0e-49d0-a161-065526d89b49" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:49:55 crc kubenswrapper[4789]: I1124 11:49:55.866021 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:49:55 crc kubenswrapper[4789]: I1124 11:49:55.869657 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-metadata-0" Nov 24 11:49:55 crc kubenswrapper[4789]: I1124 11:49:55.876181 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:49:56 crc kubenswrapper[4789]: I1124 11:49:56.631208 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:49:57 crc kubenswrapper[4789]: I1124 11:49:57.131970 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:49:57 crc kubenswrapper[4789]: I1124 11:49:57.132965 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:49:57 crc kubenswrapper[4789]: I1124 11:49:57.141075 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:49:57 crc kubenswrapper[4789]: I1124 11:49:57.141964 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:49:57 crc kubenswrapper[4789]: I1124 11:49:57.636150 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:49:57 crc kubenswrapper[4789]: I1124 11:49:57.642392 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:50:05 crc kubenswrapper[4789]: I1124 11:50:05.480317 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:50:07 crc kubenswrapper[4789]: I1124 11:50:07.167060 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:50:10 crc kubenswrapper[4789]: I1124 11:50:10.174041 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" containerName="rabbitmq" containerID="cri-o://189900dc95c48e8a3e902afa5bfccbfac9e8012793dfb430113a563c463e6eb9" gracePeriod=604796 Nov 24 11:50:12 crc kubenswrapper[4789]: I1124 11:50:12.299618 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerName="rabbitmq" containerID="cri-o://7021cc39c31aa6c4138f62bc54f62a8a1a86cc310c60d75d51202b5fe449c5b8" gracePeriod=604795 Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.795446 4789 generic.go:334] "Generic (PLEG): container finished" podID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" containerID="189900dc95c48e8a3e902afa5bfccbfac9e8012793dfb430113a563c463e6eb9" exitCode=0 Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.795577 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e","Type":"ContainerDied","Data":"189900dc95c48e8a3e902afa5bfccbfac9e8012793dfb430113a563c463e6eb9"} Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.795905 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e","Type":"ContainerDied","Data":"09ac90e8d2dc8174a64b28a962173151214ecc828c9103ef208179ca108e1bc3"} Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.795944 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ac90e8d2dc8174a64b28a962173151214ecc828c9103ef208179ca108e1bc3" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.828238 4789 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.961874 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-erlang-cookie-secret\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.961968 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-erlang-cookie\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962012 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v46pb\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-kube-api-access-v46pb\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962064 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-tls\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962157 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-server-conf\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962190 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-config-data\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962219 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962260 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-plugins\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962307 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-confd\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962361 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-pod-info\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" 
(UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.962388 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-plugins-conf\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.963374 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.963499 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.977487 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-pod-info" (OuterVolumeSpecName: "pod-info") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.978101 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-kube-api-access-v46pb" (OuterVolumeSpecName: "kube-api-access-v46pb") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "kube-api-access-v46pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.978128 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.993378 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.993415 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.993429 4789 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:16 crc kubenswrapper[4789]: I1124 11:50:16.995683 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.004825 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.014788 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.015001 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-config-data" (OuterVolumeSpecName: "config-data") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.034539 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-server-conf" (OuterVolumeSpecName: "server-conf") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.093959 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094082 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-confd\") pod \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\" (UID: \"4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e\") " Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094652 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094671 4789 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094682 4789 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094691 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v46pb\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-kube-api-access-v46pb\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094699 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094707 4789 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094716 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:17 crc kubenswrapper[4789]: W1124 11:50:17.094747 4789 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e/volumes/kubernetes.io~projected/rabbitmq-confd Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.094765 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" (UID: "4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.116849 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.196510 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.196803 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.804065 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.839755 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.847383 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.860710 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:50:17 crc kubenswrapper[4789]: E1124 11:50:17.861036 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" containerName="rabbitmq" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.861055 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" containerName="rabbitmq" Nov 24 11:50:17 crc kubenswrapper[4789]: E1124 11:50:17.861071 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" containerName="setup-container" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.861077 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" containerName="setup-container" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.861246 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" containerName="rabbitmq" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.862120 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.866580 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.866610 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.866688 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.866597 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.871811 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vvrch" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.872013 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.881217 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 11:50:17 crc kubenswrapper[4789]: I1124 11:50:17.898265 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009113 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrj8\" (UniqueName: \"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-kube-api-access-vsrj8\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009161 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-config-data\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009181 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009202 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009227 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009244 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-server-conf\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009277 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61dd768a-2e14-4e8f-89da-0feeb90b9796-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009335 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61dd768a-2e14-4e8f-89da-0feeb90b9796-pod-info\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009361 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009400 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.009416 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.110979 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61dd768a-2e14-4e8f-89da-0feeb90b9796-pod-info\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111026 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111070 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111088 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111147 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsrj8\" (UniqueName: \"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-kube-api-access-vsrj8\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111165 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-config-data\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111190 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111213 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111242 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-server-conf\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111267 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111302 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61dd768a-2e14-4e8f-89da-0feeb90b9796-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111526 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111612 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.111622 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.112906 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.113286 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.113544 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.114939 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.123237 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.124013 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.125104 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61dd768a-2e14-4e8f-89da-0feeb90b9796-pod-info\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.125425 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-config-data\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.130362 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.130905 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61dd768a-2e14-4e8f-89da-0feeb90b9796-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.132385 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61dd768a-2e14-4e8f-89da-0feeb90b9796-server-conf\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.133186 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.133428 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsrj8\" (UniqueName: 
\"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-kube-api-access-vsrj8\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.139064 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.145340 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61dd768a-2e14-4e8f-89da-0feeb90b9796-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"61dd768a-2e14-4e8f-89da-0feeb90b9796\") " pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.187846 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vvrch" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.194479 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.195258 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e" path="/var/lib/kubelet/pods/4bb4dca5-cf55-49e6-ab7c-2edb8f8a981e/volumes" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.355781 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.656196 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:50:18 crc kubenswrapper[4789]: W1124 11:50:18.665475 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61dd768a_2e14_4e8f_89da_0feeb90b9796.slice/crio-24a290b8dd1d08e4712d1c4e2cde0c92aff6b32678143f17515d6414d1fb870e WatchSource:0}: Error finding container 24a290b8dd1d08e4712d1c4e2cde0c92aff6b32678143f17515d6414d1fb870e: Status 404 returned error can't find the container with id 24a290b8dd1d08e4712d1c4e2cde0c92aff6b32678143f17515d6414d1fb870e Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.812325 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61dd768a-2e14-4e8f-89da-0feeb90b9796","Type":"ContainerStarted","Data":"24a290b8dd1d08e4712d1c4e2cde0c92aff6b32678143f17515d6414d1fb870e"} Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.815140 4789 generic.go:334] "Generic (PLEG): container finished" podID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerID="7021cc39c31aa6c4138f62bc54f62a8a1a86cc310c60d75d51202b5fe449c5b8" exitCode=0 Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.815170 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad2c0f97-8696-425d-bd5a-42a24bee8297","Type":"ContainerDied","Data":"7021cc39c31aa6c4138f62bc54f62a8a1a86cc310c60d75d51202b5fe449c5b8"} Nov 24 11:50:18 crc kubenswrapper[4789]: I1124 11:50:18.870798 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031258 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-plugins-conf\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031359 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad2c0f97-8696-425d-bd5a-42a24bee8297-erlang-cookie-secret\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031404 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-erlang-cookie\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031445 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad2c0f97-8696-425d-bd5a-42a24bee8297-pod-info\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031492 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-plugins\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031550 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-config-data\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031571 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031607 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-tls\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031638 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-confd\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031669 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-server-conf\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: 
\"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.031691 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n749d\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-kube-api-access-n749d\") pod \"ad2c0f97-8696-425d-bd5a-42a24bee8297\" (UID: \"ad2c0f97-8696-425d-bd5a-42a24bee8297\") " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.032109 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.032422 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.032439 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.037745 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.037929 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ad2c0f97-8696-425d-bd5a-42a24bee8297-pod-info" (OuterVolumeSpecName: "pod-info") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.039077 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2c0f97-8696-425d-bd5a-42a24bee8297-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.039113 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.039146 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-kube-api-access-n749d" (OuterVolumeSpecName: "kube-api-access-n749d") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "kube-api-access-n749d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.091239 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-config-data" (OuterVolumeSpecName: "config-data") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.094532 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-server-conf" (OuterVolumeSpecName: "server-conf") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133876 4789 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad2c0f97-8696-425d-bd5a-42a24bee8297-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133906 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133920 4789 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad2c0f97-8696-425d-bd5a-42a24bee8297-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133930 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133938 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133964 4789 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133973 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133981 4789 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc 
kubenswrapper[4789]: I1124 11:50:19.133989 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n749d\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-kube-api-access-n749d\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.133997 4789 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad2c0f97-8696-425d-bd5a-42a24bee8297-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.135701 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ad2c0f97-8696-425d-bd5a-42a24bee8297" (UID: "ad2c0f97-8696-425d-bd5a-42a24bee8297"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.151608 4789 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.236398 4789 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.236453 4789 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad2c0f97-8696-425d-bd5a-42a24bee8297-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.832090 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad2c0f97-8696-425d-bd5a-42a24bee8297","Type":"ContainerDied","Data":"cd9e980668f226cae8a221617ea2d9f60230ac680ef31ad8bb430d7191f0a444"} Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.832500 4789 scope.go:117] "RemoveContainer" containerID="7021cc39c31aa6c4138f62bc54f62a8a1a86cc310c60d75d51202b5fe449c5b8" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.832164 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.861025 4789 scope.go:117] "RemoveContainer" containerID="a664d29c1069225aca624a58f7f6bad45e8a79e6507290fb266b0b826e03e680" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.871655 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.890976 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.904100 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:50:19 crc kubenswrapper[4789]: E1124 11:50:19.904494 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerName="setup-container" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.904509 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerName="setup-container" Nov 24 11:50:19 crc kubenswrapper[4789]: E1124 11:50:19.904524 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerName="rabbitmq" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.904530 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerName="rabbitmq" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.904707 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2c0f97-8696-425d-bd5a-42a24bee8297" containerName="rabbitmq" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.905590 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.910419 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.910623 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.910711 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.910877 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h2b58" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.910927 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.910884 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.911198 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 11:50:19 crc kubenswrapper[4789]: I1124 11:50:19.935132 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055364 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055431 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055474 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1652b281-174f-466f-9b1b-52006fe58620-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055497 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgncj\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-kube-api-access-cgncj\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055520 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055559 4789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055614 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055712 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055742 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055766 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.055783 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1652b281-174f-466f-9b1b-52006fe58620-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157525 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157776 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157803 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157819 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1652b281-174f-466f-9b1b-52006fe58620-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157913 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157931 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157953 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1652b281-174f-466f-9b1b-52006fe58620-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157971 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgncj\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-kube-api-access-cgncj\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.157990 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.158033 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.158053 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.159440 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.159771 4789 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1652b281-174f-466f-9b1b-52006fe58620\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.159919 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.160358 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1652b281-174f-466f-9b1b-52006fe58620-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.159799 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.160677 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.165165 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.165474 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.166302 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1652b281-174f-466f-9b1b-52006fe58620-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.168286 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1652b281-174f-466f-9b1b-52006fe58620-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.179917 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgncj\" (UniqueName: \"kubernetes.io/projected/1652b281-174f-466f-9b1b-52006fe58620-kube-api-access-cgncj\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.182854 4789 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad2c0f97-8696-425d-bd5a-42a24bee8297" path="/var/lib/kubelet/pods/ad2c0f97-8696-425d-bd5a-42a24bee8297/volumes" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.201406 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1652b281-174f-466f-9b1b-52006fe58620\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.224303 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.649873 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.842088 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61dd768a-2e14-4e8f-89da-0feeb90b9796","Type":"ContainerStarted","Data":"1ed54a6efabd34e42c0ecb5fa2ff1942fe1b29c293fbd08ddaeb8d60379ee9ca"} Nov 24 11:50:20 crc kubenswrapper[4789]: I1124 11:50:20.843317 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1652b281-174f-466f-9b1b-52006fe58620","Type":"ContainerStarted","Data":"f03ae60d2d24aad64c7cda4caff94fa7db269e7465088750d17f49adde953096"} Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.704607 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-gn4wz"] Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.706087 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.721449 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.742242 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-gn4wz"] Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.792477 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.792596 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7qf\" (UniqueName: \"kubernetes.io/projected/c08b3eb6-9305-4cce-a2a0-07074c43432c-kube-api-access-sb7qf\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.792717 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.792996 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-config\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.793034 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-dns-svc\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.793105 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.894728 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7qf\" (UniqueName: \"kubernetes.io/projected/c08b3eb6-9305-4cce-a2a0-07074c43432c-kube-api-access-sb7qf\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.894817 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.894881 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-config\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.894899 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-dns-svc\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.894929 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.894955 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.895893 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.895974 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.896025 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-config\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.896444 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.896699 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-dns-svc\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:21 crc kubenswrapper[4789]: I1124 11:50:21.952157 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7qf\" (UniqueName: \"kubernetes.io/projected/c08b3eb6-9305-4cce-a2a0-07074c43432c-kube-api-access-sb7qf\") pod \"dnsmasq-dns-578b8d767c-gn4wz\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:22 crc kubenswrapper[4789]: I1124 11:50:22.022660 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:22 crc kubenswrapper[4789]: I1124 11:50:22.490664 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-gn4wz"] Nov 24 11:50:22 crc kubenswrapper[4789]: W1124 11:50:22.498201 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc08b3eb6_9305_4cce_a2a0_07074c43432c.slice/crio-0796650faaa6e3edb978ad151f7a407ba4baff13335cb05b53d38125047d23df WatchSource:0}: Error finding container 0796650faaa6e3edb978ad151f7a407ba4baff13335cb05b53d38125047d23df: Status 404 returned error can't find the container with id 0796650faaa6e3edb978ad151f7a407ba4baff13335cb05b53d38125047d23df Nov 24 11:50:22 crc kubenswrapper[4789]: I1124 11:50:22.860853 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1652b281-174f-466f-9b1b-52006fe58620","Type":"ContainerStarted","Data":"e8f19e3fd95b83165b08eba778510e214d3a6a35b811d39e55bd5379b1f626b4"} Nov 24 11:50:22 crc kubenswrapper[4789]: I1124 11:50:22.862669 4789 generic.go:334] "Generic (PLEG): container finished" podID="c08b3eb6-9305-4cce-a2a0-07074c43432c" containerID="fad99d45196216183ae0079bb9071efad88d882833f6f20baa701e7964c22835" exitCode=0 Nov 24 11:50:22 crc kubenswrapper[4789]: I1124 11:50:22.862715 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" event={"ID":"c08b3eb6-9305-4cce-a2a0-07074c43432c","Type":"ContainerDied","Data":"fad99d45196216183ae0079bb9071efad88d882833f6f20baa701e7964c22835"} Nov 24 11:50:22 crc kubenswrapper[4789]: I1124 11:50:22.862741 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" event={"ID":"c08b3eb6-9305-4cce-a2a0-07074c43432c","Type":"ContainerStarted","Data":"0796650faaa6e3edb978ad151f7a407ba4baff13335cb05b53d38125047d23df"} Nov 24 11:50:23 crc kubenswrapper[4789]: I1124 11:50:23.874504 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" event={"ID":"c08b3eb6-9305-4cce-a2a0-07074c43432c","Type":"ContainerStarted","Data":"bd68fd9765988913b25adcd93f01fd0dacd3758141f4db2e37ea4383be190664"} Nov 24 11:50:23 crc kubenswrapper[4789]: I1124 11:50:23.905668 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" podStartSLOduration=2.905645436 podStartE2EDuration="2.905645436s" podCreationTimestamp="2025-11-24 11:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:50:23.897245228 +0000 UTC m=+1206.479716627" watchObservedRunningTime="2025-11-24 11:50:23.905645436 +0000 UTC m=+1206.488116825" Nov 24 11:50:24 crc kubenswrapper[4789]: I1124 11:50:24.887245 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:28 crc kubenswrapper[4789]: I1124 11:50:28.279510 4789 scope.go:117] "RemoveContainer" containerID="8c52d54908140cfcb365b6a1729a7027eb9a66bf1e7bb2a3d3c70fe2c1cdeada" Nov 24 11:50:28 crc kubenswrapper[4789]: I1124 11:50:28.308852 4789 scope.go:117] "RemoveContainer" containerID="76ebef0c80cdc9f2b47ef5f1613f0b509031d0ed84672d7551662b729c1af17b" Nov 24 11:50:28 crc kubenswrapper[4789]: I1124 11:50:28.330372 4789 scope.go:117] "RemoveContainer" 
containerID="03480dce90f9f0aa8e2752b06fc29358b14eb461e687c18ac7590dd074a74c22" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.024744 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.139195 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-sdbck"] Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.139498 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" podUID="aac75533-96ca-444e-9f80-862d3dab3959" containerName="dnsmasq-dns" containerID="cri-o://5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859" gracePeriod=10 Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.369827 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69fd9b48bc-fwmqb"] Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.373941 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.410628 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69fd9b48bc-fwmqb"] Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.530772 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-config\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.530888 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mtzp\" (UniqueName: \"kubernetes.io/projected/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-kube-api-access-9mtzp\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.530943 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-ovsdbserver-nb\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.530964 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-ovsdbserver-sb\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.530996 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-dns-svc\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.531032 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-openstack-edpm-ipam\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.632671 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mtzp\" (UniqueName: \"kubernetes.io/projected/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-kube-api-access-9mtzp\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.633054 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-ovsdbserver-sb\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.633072 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-ovsdbserver-nb\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.633113 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-dns-svc\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.633145 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-openstack-edpm-ipam\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.633178 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-config\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.634060 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-config\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.635009 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-ovsdbserver-nb\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.635169 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-dns-svc\") pod 
\"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.635262 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-openstack-edpm-ipam\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.636145 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-ovsdbserver-sb\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.654653 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mtzp\" (UniqueName: \"kubernetes.io/projected/0ffa9725-d57a-4cbd-8fbd-84702ae4799e-kube-api-access-9mtzp\") pod \"dnsmasq-dns-69fd9b48bc-fwmqb\" (UID: \"0ffa9725-d57a-4cbd-8fbd-84702ae4799e\") " pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.702227 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.817932 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.941720 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-nb\") pod \"aac75533-96ca-444e-9f80-862d3dab3959\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.941794 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-dns-svc\") pod \"aac75533-96ca-444e-9f80-862d3dab3959\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.941825 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-config\") pod \"aac75533-96ca-444e-9f80-862d3dab3959\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.941887 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4bk5\" (UniqueName: \"kubernetes.io/projected/aac75533-96ca-444e-9f80-862d3dab3959-kube-api-access-f4bk5\") pod \"aac75533-96ca-444e-9f80-862d3dab3959\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.941958 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-sb\") pod \"aac75533-96ca-444e-9f80-862d3dab3959\" (UID: \"aac75533-96ca-444e-9f80-862d3dab3959\") " Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.949760 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/aac75533-96ca-444e-9f80-862d3dab3959-kube-api-access-f4bk5" (OuterVolumeSpecName: "kube-api-access-f4bk5") pod "aac75533-96ca-444e-9f80-862d3dab3959" (UID: "aac75533-96ca-444e-9f80-862d3dab3959"). InnerVolumeSpecName "kube-api-access-f4bk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.980954 4789 generic.go:334] "Generic (PLEG): container finished" podID="aac75533-96ca-444e-9f80-862d3dab3959" containerID="5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859" exitCode=0 Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.981261 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" event={"ID":"aac75533-96ca-444e-9f80-862d3dab3959","Type":"ContainerDied","Data":"5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859"} Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.981287 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" event={"ID":"aac75533-96ca-444e-9f80-862d3dab3959","Type":"ContainerDied","Data":"6cc27eec0ae7a1ae99d7904311d401d468dc13b707949854435e5fb198e54e7f"} Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.981303 4789 scope.go:117] "RemoveContainer" containerID="5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859" Nov 24 11:50:32 crc kubenswrapper[4789]: I1124 11:50:32.981405 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.004585 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aac75533-96ca-444e-9f80-862d3dab3959" (UID: "aac75533-96ca-444e-9f80-862d3dab3959"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.010699 4789 scope.go:117] "RemoveContainer" containerID="9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.011101 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aac75533-96ca-444e-9f80-862d3dab3959" (UID: "aac75533-96ca-444e-9f80-862d3dab3959"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.032151 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aac75533-96ca-444e-9f80-862d3dab3959" (UID: "aac75533-96ca-444e-9f80-862d3dab3959"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.039493 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-config" (OuterVolumeSpecName: "config") pod "aac75533-96ca-444e-9f80-862d3dab3959" (UID: "aac75533-96ca-444e-9f80-862d3dab3959"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.047757 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.047787 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.047797 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.047808 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aac75533-96ca-444e-9f80-862d3dab3959-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.047818 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4bk5\" (UniqueName: \"kubernetes.io/projected/aac75533-96ca-444e-9f80-862d3dab3959-kube-api-access-f4bk5\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.051687 4789 scope.go:117] "RemoveContainer" containerID="5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859" Nov 24 11:50:33 crc kubenswrapper[4789]: E1124 11:50:33.052451 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859\": container with ID starting with 5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859 not found: ID does not exist" containerID="5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.052511 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859"} err="failed to get container status \"5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859\": rpc error: code = NotFound desc = could not find container \"5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859\": container with ID starting with 5209db0bfe8965166e880c4a785d854b2f57b620556bf42fa836d4e12cf34859 not found: ID does not exist" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.052551 4789 scope.go:117] "RemoveContainer" containerID="9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48" Nov 24 11:50:33 crc kubenswrapper[4789]: E1124 11:50:33.052822 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48\": container with ID starting with 9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48 not found: ID does not exist" containerID="9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.052843 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48"} err="failed to get container status 
\"9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48\": rpc error: code = NotFound desc = could not find container \"9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48\": container with ID starting with 9c62e910e71d5ba3cb4d7a524e6458ce2586a5a4a69901cf96908dbf3bec5b48 not found: ID does not exist" Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.243946 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69fd9b48bc-fwmqb"] Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.420773 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-sdbck"] Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.430224 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-sdbck"] Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.992844 4789 generic.go:334] "Generic (PLEG): container finished" podID="0ffa9725-d57a-4cbd-8fbd-84702ae4799e" containerID="6357f59ab8c794fda86164a3dc2c7326833685d48bae3c0e0964a98f82254996" exitCode=0 Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.992922 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" event={"ID":"0ffa9725-d57a-4cbd-8fbd-84702ae4799e","Type":"ContainerDied","Data":"6357f59ab8c794fda86164a3dc2c7326833685d48bae3c0e0964a98f82254996"} Nov 24 11:50:33 crc kubenswrapper[4789]: I1124 11:50:33.992954 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" event={"ID":"0ffa9725-d57a-4cbd-8fbd-84702ae4799e","Type":"ContainerStarted","Data":"ad70007dd1f8fc59cae6668072b19b7d2d64790f498938348f1be639c8b3c7a8"} Nov 24 11:50:34 crc kubenswrapper[4789]: I1124 11:50:34.179879 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac75533-96ca-444e-9f80-862d3dab3959" path="/var/lib/kubelet/pods/aac75533-96ca-444e-9f80-862d3dab3959/volumes" Nov 24 11:50:35 crc kubenswrapper[4789]: I1124 11:50:35.011545 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" event={"ID":"0ffa9725-d57a-4cbd-8fbd-84702ae4799e","Type":"ContainerStarted","Data":"edb1072e5f286199228a3239631f42a5d9841e91255c91270ba9ade0103a5c0f"} Nov 24 11:50:35 crc kubenswrapper[4789]: I1124 11:50:35.012146 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:35 crc kubenswrapper[4789]: I1124 11:50:35.047873 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" podStartSLOduration=3.04784438 podStartE2EDuration="3.04784438s" podCreationTimestamp="2025-11-24 11:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:50:35.033904571 +0000 UTC m=+1217.616376030" watchObservedRunningTime="2025-11-24 11:50:35.04784438 +0000 UTC m=+1217.630315799" Nov 24 11:50:37 crc kubenswrapper[4789]: I1124 11:50:37.585048 4789 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68d4b6d797-sdbck" podUID="aac75533-96ca-444e-9f80-862d3dab3959" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.182:5353: i/o timeout" Nov 24 11:50:42 crc kubenswrapper[4789]: I1124 11:50:42.704645 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69fd9b48bc-fwmqb" Nov 24 11:50:42 crc kubenswrapper[4789]: I1124 
11:50:42.802020 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-gn4wz"] Nov 24 11:50:42 crc kubenswrapper[4789]: I1124 11:50:42.802237 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" podUID="c08b3eb6-9305-4cce-a2a0-07074c43432c" containerName="dnsmasq-dns" containerID="cri-o://bd68fd9765988913b25adcd93f01fd0dacd3758141f4db2e37ea4383be190664" gracePeriod=10 Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.105792 4789 generic.go:334] "Generic (PLEG): container finished" podID="c08b3eb6-9305-4cce-a2a0-07074c43432c" containerID="bd68fd9765988913b25adcd93f01fd0dacd3758141f4db2e37ea4383be190664" exitCode=0 Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.106104 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" event={"ID":"c08b3eb6-9305-4cce-a2a0-07074c43432c","Type":"ContainerDied","Data":"bd68fd9765988913b25adcd93f01fd0dacd3758141f4db2e37ea4383be190664"} Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.270119 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.368803 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-nb\") pod \"c08b3eb6-9305-4cce-a2a0-07074c43432c\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.368883 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-config\") pod \"c08b3eb6-9305-4cce-a2a0-07074c43432c\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.369064 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-openstack-edpm-ipam\") pod \"c08b3eb6-9305-4cce-a2a0-07074c43432c\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.369095 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-dns-svc\") pod \"c08b3eb6-9305-4cce-a2a0-07074c43432c\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.369116 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb7qf\" (UniqueName: \"kubernetes.io/projected/c08b3eb6-9305-4cce-a2a0-07074c43432c-kube-api-access-sb7qf\") pod \"c08b3eb6-9305-4cce-a2a0-07074c43432c\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.369132 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-sb\") pod \"c08b3eb6-9305-4cce-a2a0-07074c43432c\" (UID: \"c08b3eb6-9305-4cce-a2a0-07074c43432c\") " Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.391636 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c08b3eb6-9305-4cce-a2a0-07074c43432c-kube-api-access-sb7qf" (OuterVolumeSpecName: "kube-api-access-sb7qf") pod "c08b3eb6-9305-4cce-a2a0-07074c43432c" (UID: "c08b3eb6-9305-4cce-a2a0-07074c43432c"). InnerVolumeSpecName "kube-api-access-sb7qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.446317 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-config" (OuterVolumeSpecName: "config") pod "c08b3eb6-9305-4cce-a2a0-07074c43432c" (UID: "c08b3eb6-9305-4cce-a2a0-07074c43432c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.462011 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c08b3eb6-9305-4cce-a2a0-07074c43432c" (UID: "c08b3eb6-9305-4cce-a2a0-07074c43432c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.462815 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c08b3eb6-9305-4cce-a2a0-07074c43432c" (UID: "c08b3eb6-9305-4cce-a2a0-07074c43432c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.466531 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c08b3eb6-9305-4cce-a2a0-07074c43432c" (UID: "c08b3eb6-9305-4cce-a2a0-07074c43432c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.467054 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c08b3eb6-9305-4cce-a2a0-07074c43432c" (UID: "c08b3eb6-9305-4cce-a2a0-07074c43432c"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.471043 4789 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.471067 4789 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.471079 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb7qf\" (UniqueName: \"kubernetes.io/projected/c08b3eb6-9305-4cce-a2a0-07074c43432c-kube-api-access-sb7qf\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.471088 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.471097 4789 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:43 crc kubenswrapper[4789]: I1124 11:50:43.471105 4789 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b3eb6-9305-4cce-a2a0-07074c43432c-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:44 crc kubenswrapper[4789]: I1124 11:50:44.116679 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" event={"ID":"c08b3eb6-9305-4cce-a2a0-07074c43432c","Type":"ContainerDied","Data":"0796650faaa6e3edb978ad151f7a407ba4baff13335cb05b53d38125047d23df"} Nov 24 11:50:44 crc kubenswrapper[4789]: I1124 11:50:44.116747 4789 scope.go:117] "RemoveContainer" containerID="bd68fd9765988913b25adcd93f01fd0dacd3758141f4db2e37ea4383be190664" Nov 24 11:50:44 crc kubenswrapper[4789]: I1124 11:50:44.116768 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-gn4wz" Nov 24 11:50:44 crc kubenswrapper[4789]: I1124 11:50:44.139530 4789 scope.go:117] "RemoveContainer" containerID="fad99d45196216183ae0079bb9071efad88d882833f6f20baa701e7964c22835" Nov 24 11:50:44 crc kubenswrapper[4789]: I1124 11:50:44.160877 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-gn4wz"] Nov 24 11:50:44 crc kubenswrapper[4789]: I1124 11:50:44.168883 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-gn4wz"] Nov 24 11:50:44 crc kubenswrapper[4789]: I1124 11:50:44.189933 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08b3eb6-9305-4cce-a2a0-07074c43432c" path="/var/lib/kubelet/pods/c08b3eb6-9305-4cce-a2a0-07074c43432c/volumes" Nov 24 11:50:50 crc kubenswrapper[4789]: I1124 11:50:50.162331 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:50:50 crc kubenswrapper[4789]: I1124 11:50:50.162944 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.868209 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6"] Nov 24 11:50:52 crc kubenswrapper[4789]: E1124 11:50:52.870444 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac75533-96ca-444e-9f80-862d3dab3959" containerName="init" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.870668 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac75533-96ca-444e-9f80-862d3dab3959" containerName="init" Nov 24 11:50:52 crc kubenswrapper[4789]: E1124 11:50:52.870823 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08b3eb6-9305-4cce-a2a0-07074c43432c" containerName="dnsmasq-dns" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.870942 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08b3eb6-9305-4cce-a2a0-07074c43432c" containerName="dnsmasq-dns" Nov 24 11:50:52 crc kubenswrapper[4789]: E1124 11:50:52.871128 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac75533-96ca-444e-9f80-862d3dab3959" containerName="dnsmasq-dns" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.871254 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac75533-96ca-444e-9f80-862d3dab3959" containerName="dnsmasq-dns" Nov 24 11:50:52 crc kubenswrapper[4789]: E1124 11:50:52.871404 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08b3eb6-9305-4cce-a2a0-07074c43432c" containerName="init" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.871559 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08b3eb6-9305-4cce-a2a0-07074c43432c" containerName="init" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.871933 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08b3eb6-9305-4cce-a2a0-07074c43432c" containerName="dnsmasq-dns" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.872070 4789 
memory_manager.go:354] "RemoveStaleState removing state" podUID="aac75533-96ca-444e-9f80-862d3dab3959" containerName="dnsmasq-dns" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.873182 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.878965 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.879709 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.879877 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.880295 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.901528 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6"] Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.951216 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.951275 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.951391 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnz7x\" (UniqueName: \"kubernetes.io/projected/9fa6d7a2-c7df-413c-8a31-3d7e76031554-kube-api-access-xnz7x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:52 crc kubenswrapper[4789]: I1124 11:50:52.951439 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.052861 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnz7x\" (UniqueName: \"kubernetes.io/projected/9fa6d7a2-c7df-413c-8a31-3d7e76031554-kube-api-access-xnz7x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc 
kubenswrapper[4789]: I1124 11:50:53.052931 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.053001 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.053036 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.060256 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.061107 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.064013 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.077778 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnz7x\" (UniqueName: \"kubernetes.io/projected/9fa6d7a2-c7df-413c-8a31-3d7e76031554-kube-api-access-xnz7x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.192059 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.215657 4789 generic.go:334] "Generic (PLEG): container finished" podID="61dd768a-2e14-4e8f-89da-0feeb90b9796" containerID="1ed54a6efabd34e42c0ecb5fa2ff1942fe1b29c293fbd08ddaeb8d60379ee9ca" exitCode=0 Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.215703 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61dd768a-2e14-4e8f-89da-0feeb90b9796","Type":"ContainerDied","Data":"1ed54a6efabd34e42c0ecb5fa2ff1942fe1b29c293fbd08ddaeb8d60379ee9ca"} Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.806556 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6"] Nov 24 11:50:53 crc kubenswrapper[4789]: W1124 11:50:53.809431 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fa6d7a2_c7df_413c_8a31_3d7e76031554.slice/crio-6eecb42ee026b921584229f6619469b2101ebeef9b530bc0177066579cfc8e1d WatchSource:0}: Error finding container 6eecb42ee026b921584229f6619469b2101ebeef9b530bc0177066579cfc8e1d: Status 404 returned error can't find the container with id 6eecb42ee026b921584229f6619469b2101ebeef9b530bc0177066579cfc8e1d Nov 24 11:50:53 crc kubenswrapper[4789]: I1124 11:50:53.812716 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:50:54 crc kubenswrapper[4789]: I1124 11:50:54.225292 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" event={"ID":"9fa6d7a2-c7df-413c-8a31-3d7e76031554","Type":"ContainerStarted","Data":"6eecb42ee026b921584229f6619469b2101ebeef9b530bc0177066579cfc8e1d"} Nov 24 11:50:54 crc kubenswrapper[4789]: I1124 11:50:54.228140 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61dd768a-2e14-4e8f-89da-0feeb90b9796","Type":"ContainerStarted","Data":"94c47fde4aff1b19e59f773a95405e6928e591deda88beffd0a3ea70ebb316a8"} Nov 24 11:50:54 crc kubenswrapper[4789]: I1124 11:50:54.228395 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 11:50:54 crc kubenswrapper[4789]: I1124 11:50:54.276187 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.276164003 podStartE2EDuration="37.276164003s" podCreationTimestamp="2025-11-24 11:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:50:54.275879106 +0000 UTC m=+1236.858350535" watchObservedRunningTime="2025-11-24 11:50:54.276164003 +0000 UTC m=+1236.858635392" Nov 24 11:50:55 crc kubenswrapper[4789]: I1124 11:50:55.238445 4789 generic.go:334] "Generic (PLEG): container finished" podID="1652b281-174f-466f-9b1b-52006fe58620" containerID="e8f19e3fd95b83165b08eba778510e214d3a6a35b811d39e55bd5379b1f626b4" exitCode=0 Nov 24 11:50:55 crc kubenswrapper[4789]: I1124 11:50:55.238511 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1652b281-174f-466f-9b1b-52006fe58620","Type":"ContainerDied","Data":"e8f19e3fd95b83165b08eba778510e214d3a6a35b811d39e55bd5379b1f626b4"} Nov 24 11:50:58 crc kubenswrapper[4789]: I1124 11:50:58.282939 4789 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1652b281-174f-466f-9b1b-52006fe58620","Type":"ContainerStarted","Data":"fca7e4df01044346e9596db744a18201725a6edeb62c617b973cca88af00b140"} Nov 24 11:50:58 crc kubenswrapper[4789]: I1124 11:50:58.286690 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:50:58 crc kubenswrapper[4789]: I1124 11:50:58.317523 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.317499721 podStartE2EDuration="39.317499721s" podCreationTimestamp="2025-11-24 11:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:50:58.307127937 +0000 UTC m=+1240.889599316" watchObservedRunningTime="2025-11-24 11:50:58.317499721 +0000 UTC m=+1240.899971110" Nov 24 11:51:05 crc kubenswrapper[4789]: I1124 11:51:05.382107 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" event={"ID":"9fa6d7a2-c7df-413c-8a31-3d7e76031554","Type":"ContainerStarted","Data":"33712bb8cfcd17e6883419545922403e88a1ec8257e1f7c3c773dae5b2174188"} Nov 24 11:51:05 crc kubenswrapper[4789]: I1124 11:51:05.405745 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" podStartSLOduration=2.2454728 podStartE2EDuration="13.405724989s" podCreationTimestamp="2025-11-24 11:50:52 +0000 UTC" firstStartedPulling="2025-11-24 11:50:53.812357986 +0000 UTC m=+1236.394829375" lastFinishedPulling="2025-11-24 11:51:04.972610185 +0000 UTC m=+1247.555081564" observedRunningTime="2025-11-24 11:51:05.403331802 +0000 UTC m=+1247.985803181" watchObservedRunningTime="2025-11-24 11:51:05.405724989 +0000 UTC m=+1247.988196378" Nov 24 11:51:08 crc kubenswrapper[4789]: I1124 11:51:08.197915 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 11:51:10 crc kubenswrapper[4789]: I1124 11:51:10.228343 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:51:16 crc kubenswrapper[4789]: I1124 11:51:16.489876 4789 generic.go:334] "Generic (PLEG): container finished" podID="9fa6d7a2-c7df-413c-8a31-3d7e76031554" containerID="33712bb8cfcd17e6883419545922403e88a1ec8257e1f7c3c773dae5b2174188" exitCode=0 Nov 24 11:51:16 crc kubenswrapper[4789]: I1124 11:51:16.490033 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" event={"ID":"9fa6d7a2-c7df-413c-8a31-3d7e76031554","Type":"ContainerDied","Data":"33712bb8cfcd17e6883419545922403e88a1ec8257e1f7c3c773dae5b2174188"} Nov 24 11:51:17 crc kubenswrapper[4789]: I1124 11:51:17.932718 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.097810 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-repo-setup-combined-ca-bundle\") pod \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.097978 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-inventory\") pod \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.098020 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-ssh-key\") pod \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.098034 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnz7x\" (UniqueName: \"kubernetes.io/projected/9fa6d7a2-c7df-413c-8a31-3d7e76031554-kube-api-access-xnz7x\") pod \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\" (UID: \"9fa6d7a2-c7df-413c-8a31-3d7e76031554\") " Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.104683 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa6d7a2-c7df-413c-8a31-3d7e76031554-kube-api-access-xnz7x" (OuterVolumeSpecName: "kube-api-access-xnz7x") pod "9fa6d7a2-c7df-413c-8a31-3d7e76031554" (UID: "9fa6d7a2-c7df-413c-8a31-3d7e76031554"). InnerVolumeSpecName "kube-api-access-xnz7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.109589 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9fa6d7a2-c7df-413c-8a31-3d7e76031554" (UID: "9fa6d7a2-c7df-413c-8a31-3d7e76031554"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.127438 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9fa6d7a2-c7df-413c-8a31-3d7e76031554" (UID: "9fa6d7a2-c7df-413c-8a31-3d7e76031554"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.159878 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-inventory" (OuterVolumeSpecName: "inventory") pod "9fa6d7a2-c7df-413c-8a31-3d7e76031554" (UID: "9fa6d7a2-c7df-413c-8a31-3d7e76031554"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.200406 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.200470 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.200546 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnz7x\" (UniqueName: \"kubernetes.io/projected/9fa6d7a2-c7df-413c-8a31-3d7e76031554-kube-api-access-xnz7x\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.200815 4789 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa6d7a2-c7df-413c-8a31-3d7e76031554-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.512937 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" event={"ID":"9fa6d7a2-c7df-413c-8a31-3d7e76031554","Type":"ContainerDied","Data":"6eecb42ee026b921584229f6619469b2101ebeef9b530bc0177066579cfc8e1d"} Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.513273 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eecb42ee026b921584229f6619469b2101ebeef9b530bc0177066579cfc8e1d" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.512997 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.595391 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk"] Nov 24 11:51:18 crc kubenswrapper[4789]: E1124 11:51:18.595828 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa6d7a2-c7df-413c-8a31-3d7e76031554" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.595848 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa6d7a2-c7df-413c-8a31-3d7e76031554" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.596013 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa6d7a2-c7df-413c-8a31-3d7e76031554" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.596636 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.600798 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.601309 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.603343 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.611900 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.630049 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.630097 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c296l\" (UniqueName: \"kubernetes.io/projected/d2940969-00db-4677-aaae-5d1d0a25a10a-kube-api-access-c296l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.630125 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.630168 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk"] Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.630789 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.732147 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.732215 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.732244 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c296l\" (UniqueName: \"kubernetes.io/projected/d2940969-00db-4677-aaae-5d1d0a25a10a-kube-api-access-c296l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.732266 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.736950 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.737242 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.740399 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.750969 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c296l\" (UniqueName: \"kubernetes.io/projected/d2940969-00db-4677-aaae-5d1d0a25a10a-kube-api-access-c296l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:18 crc kubenswrapper[4789]: I1124 11:51:18.912629 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:51:19 crc kubenswrapper[4789]: I1124 11:51:19.456731 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk"] Nov 24 11:51:19 crc kubenswrapper[4789]: I1124 11:51:19.524822 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" event={"ID":"d2940969-00db-4677-aaae-5d1d0a25a10a","Type":"ContainerStarted","Data":"0ea614d63fce9aeaf5bc4f1f0f42a9a150ecba0df27ab8212d327d22c2f9373c"} Nov 24 11:51:20 crc kubenswrapper[4789]: I1124 11:51:20.161954 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:51:20 crc kubenswrapper[4789]: I1124 11:51:20.162317 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:51:20 crc kubenswrapper[4789]: I1124 11:51:20.545811 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" event={"ID":"d2940969-00db-4677-aaae-5d1d0a25a10a","Type":"ContainerStarted","Data":"08192a0c80793c391c488f16b993af1e1c049a56c853100df1246ea4de1e8b34"} Nov 24 11:51:20 crc kubenswrapper[4789]: I1124 11:51:20.581376 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" podStartSLOduration=2.150821789 podStartE2EDuration="2.581351451s" podCreationTimestamp="2025-11-24 11:51:18 +0000 UTC" firstStartedPulling="2025-11-24 11:51:19.495858699 +0000 UTC m=+1262.078330078" lastFinishedPulling="2025-11-24 11:51:19.926388361 +0000 UTC m=+1262.508859740" observedRunningTime="2025-11-24 11:51:20.580065521 +0000 UTC m=+1263.162536910" watchObservedRunningTime="2025-11-24 11:51:20.581351451 +0000 UTC m=+1263.163822870" Nov 24 11:51:28 crc kubenswrapper[4789]: I1124 11:51:28.462292 4789 scope.go:117] "RemoveContainer" containerID="9a28c3039c74fe442ed3bbd247f272af8ce6498883c5cf3377a5ba815e084551" Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.162972 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.163538 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.163594 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.164782 4789 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7f4024a35602eb88a760e42e4dc78156ab6feb43e0ae706700d1e332b76e45c"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.164890 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://a7f4024a35602eb88a760e42e4dc78156ab6feb43e0ae706700d1e332b76e45c" gracePeriod=600 Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.855517 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="a7f4024a35602eb88a760e42e4dc78156ab6feb43e0ae706700d1e332b76e45c" exitCode=0 Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.855612 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"a7f4024a35602eb88a760e42e4dc78156ab6feb43e0ae706700d1e332b76e45c"} Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.856090 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"} Nov 24 11:51:50 crc kubenswrapper[4789]: I1124 11:51:50.856108 4789 scope.go:117] "RemoveContainer" containerID="f3cea7aef07d9136d7cecc4814ad70b6e4b4a4c56940366aabbc6b2f1bc56ebf" Nov 24 11:52:28 crc kubenswrapper[4789]: I1124 11:52:28.535739 4789 scope.go:117] "RemoveContainer" containerID="3865617a9e3d9f4a6c335d0b89d7ca697efada950648d2bace3fc1c19a4236c9" Nov 24 11:52:28 crc kubenswrapper[4789]: I1124 11:52:28.587386 4789 scope.go:117] "RemoveContainer" containerID="189900dc95c48e8a3e902afa5bfccbfac9e8012793dfb430113a563c463e6eb9" Nov 24 11:53:28 crc kubenswrapper[4789]: I1124 11:53:28.707147 4789 scope.go:117] "RemoveContainer" containerID="4468ec69241d242d18355eabb44c8175ffd56094d1c1619620fa8455b26ad737" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.495217 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nm5kp"] Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.498471 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.517808 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm5kp"] Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.553744 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-catalog-content\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.553903 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mnl8\" (UniqueName: \"kubernetes.io/projected/17e264ef-9668-45ca-81fd-13a8fc192716-kube-api-access-9mnl8\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.553994 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-utilities\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.655627 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mnl8\" (UniqueName: \"kubernetes.io/projected/17e264ef-9668-45ca-81fd-13a8fc192716-kube-api-access-9mnl8\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.656015 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-utilities\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.656085 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-catalog-content\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.656594 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-utilities\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.656686 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-catalog-content\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.680078 4789 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9mnl8\" (UniqueName: \"kubernetes.io/projected/17e264ef-9668-45ca-81fd-13a8fc192716-kube-api-access-9mnl8\") pod \"redhat-marketplace-nm5kp\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:47 crc kubenswrapper[4789]: I1124 11:53:47.818765 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:48 crc kubenswrapper[4789]: I1124 11:53:48.271592 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm5kp"] Nov 24 11:53:48 crc kubenswrapper[4789]: W1124 11:53:48.277632 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e264ef_9668_45ca_81fd_13a8fc192716.slice/crio-ac83491a61c254973e2c5e2c83d627163424805d1a31ff5fb55a2511189f53aa WatchSource:0}: Error finding container ac83491a61c254973e2c5e2c83d627163424805d1a31ff5fb55a2511189f53aa: Status 404 returned error can't find the container with id ac83491a61c254973e2c5e2c83d627163424805d1a31ff5fb55a2511189f53aa Nov 24 11:53:48 crc kubenswrapper[4789]: I1124 11:53:48.963503 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm5kp" event={"ID":"17e264ef-9668-45ca-81fd-13a8fc192716","Type":"ContainerDied","Data":"96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0"} Nov 24 11:53:48 crc kubenswrapper[4789]: I1124 11:53:48.963394 4789 generic.go:334] "Generic (PLEG): container finished" podID="17e264ef-9668-45ca-81fd-13a8fc192716" containerID="96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0" exitCode=0 Nov 24 11:53:48 crc kubenswrapper[4789]: I1124 11:53:48.964854 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm5kp" event={"ID":"17e264ef-9668-45ca-81fd-13a8fc192716","Type":"ContainerStarted","Data":"ac83491a61c254973e2c5e2c83d627163424805d1a31ff5fb55a2511189f53aa"} Nov 24 11:53:49 crc kubenswrapper[4789]: I1124 11:53:49.973785 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm5kp" event={"ID":"17e264ef-9668-45ca-81fd-13a8fc192716","Type":"ContainerStarted","Data":"30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280"} Nov 24 11:53:50 crc kubenswrapper[4789]: I1124 11:53:50.162601 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:53:50 crc kubenswrapper[4789]: I1124 11:53:50.162715 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:53:50 crc kubenswrapper[4789]: I1124 11:53:50.982452 4789 generic.go:334] "Generic (PLEG): container finished" podID="17e264ef-9668-45ca-81fd-13a8fc192716" containerID="30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280" exitCode=0 Nov 24 11:53:50 crc kubenswrapper[4789]: I1124 11:53:50.982513 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nm5kp" event={"ID":"17e264ef-9668-45ca-81fd-13a8fc192716","Type":"ContainerDied","Data":"30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280"} Nov 24 11:53:51 crc kubenswrapper[4789]: I1124 11:53:51.996557 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm5kp" event={"ID":"17e264ef-9668-45ca-81fd-13a8fc192716","Type":"ContainerStarted","Data":"1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44"} Nov 24 11:53:52 crc kubenswrapper[4789]: I1124 11:53:52.022488 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nm5kp" podStartSLOduration=2.593197575 podStartE2EDuration="5.022449639s" podCreationTimestamp="2025-11-24 11:53:47 +0000 UTC" firstStartedPulling="2025-11-24 11:53:48.965943914 +0000 UTC m=+1411.548415293" lastFinishedPulling="2025-11-24 11:53:51.395195978 +0000 UTC m=+1413.977667357" observedRunningTime="2025-11-24 11:53:52.020895243 +0000 UTC m=+1414.603366632" watchObservedRunningTime="2025-11-24 11:53:52.022449639 +0000 UTC m=+1414.604921018" Nov 24 11:53:57 crc kubenswrapper[4789]: I1124 11:53:57.819758 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:57 crc kubenswrapper[4789]: I1124 11:53:57.821477 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:57 crc kubenswrapper[4789]: I1124 11:53:57.879145 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:58 crc kubenswrapper[4789]: I1124 11:53:58.099870 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:53:58 crc kubenswrapper[4789]: I1124 11:53:58.157708 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm5kp"] Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.069726 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nm5kp" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" containerName="registry-server" containerID="cri-o://1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44" gracePeriod=2 Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.563171 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.619709 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-catalog-content\") pod \"17e264ef-9668-45ca-81fd-13a8fc192716\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.619757 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mnl8\" (UniqueName: \"kubernetes.io/projected/17e264ef-9668-45ca-81fd-13a8fc192716-kube-api-access-9mnl8\") pod \"17e264ef-9668-45ca-81fd-13a8fc192716\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.619796 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-utilities\") pod \"17e264ef-9668-45ca-81fd-13a8fc192716\" (UID: \"17e264ef-9668-45ca-81fd-13a8fc192716\") " Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.621276 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-utilities" (OuterVolumeSpecName: "utilities") pod "17e264ef-9668-45ca-81fd-13a8fc192716" (UID: "17e264ef-9668-45ca-81fd-13a8fc192716"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.634691 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e264ef-9668-45ca-81fd-13a8fc192716-kube-api-access-9mnl8" (OuterVolumeSpecName: "kube-api-access-9mnl8") pod "17e264ef-9668-45ca-81fd-13a8fc192716" (UID: "17e264ef-9668-45ca-81fd-13a8fc192716"). InnerVolumeSpecName "kube-api-access-9mnl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.641032 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17e264ef-9668-45ca-81fd-13a8fc192716" (UID: "17e264ef-9668-45ca-81fd-13a8fc192716"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.722059 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.722096 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17e264ef-9668-45ca-81fd-13a8fc192716-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:00 crc kubenswrapper[4789]: I1124 11:54:00.722108 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mnl8\" (UniqueName: \"kubernetes.io/projected/17e264ef-9668-45ca-81fd-13a8fc192716-kube-api-access-9mnl8\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.077786 4789 generic.go:334] "Generic (PLEG): container finished" podID="17e264ef-9668-45ca-81fd-13a8fc192716" containerID="1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44" exitCode=0 Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.077824 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm5kp" event={"ID":"17e264ef-9668-45ca-81fd-13a8fc192716","Type":"ContainerDied","Data":"1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44"} Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.077848 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm5kp" event={"ID":"17e264ef-9668-45ca-81fd-13a8fc192716","Type":"ContainerDied","Data":"ac83491a61c254973e2c5e2c83d627163424805d1a31ff5fb55a2511189f53aa"} Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.077864 4789 scope.go:117] "RemoveContainer" containerID="1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.077883 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nm5kp" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.099655 4789 scope.go:117] "RemoveContainer" containerID="30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.116548 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm5kp"] Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.133063 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm5kp"] Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.148843 4789 scope.go:117] "RemoveContainer" containerID="96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.169874 4789 scope.go:117] "RemoveContainer" containerID="1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44" Nov 24 11:54:01 crc kubenswrapper[4789]: E1124 11:54:01.170280 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44\": container with ID starting with 1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44 not found: ID does not exist" containerID="1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.170329 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44"} err="failed to get container status \"1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44\": rpc error: code = NotFound desc = could not find container \"1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44\": container with ID starting with 1d454f82f628727d17d8f4ce1652df3c31b81dc0e051d6fa999c457e077ecc44 not found: ID does not exist" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.170361 4789 scope.go:117] "RemoveContainer" containerID="30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280" Nov 24 11:54:01 crc kubenswrapper[4789]: E1124 11:54:01.170815 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280\": container with ID starting with 30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280 not found: ID does not exist" containerID="30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.170847 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280"} err="failed to get container status \"30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280\": rpc error: code = NotFound desc = could not find container \"30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280\": container with ID starting with 30684a00e55251e58ea66cf210e1497f19709b4747a04b4dae72dd8413a65280 not found: ID does not exist" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.170866 4789 scope.go:117] "RemoveContainer" containerID="96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0" Nov 24 11:54:01 crc kubenswrapper[4789]: E1124 11:54:01.171114 4789 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0\": container with ID starting with 96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0 not found: ID does not exist" containerID="96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0" Nov 24 11:54:01 crc kubenswrapper[4789]: I1124 11:54:01.171139 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0"} err="failed to get container status \"96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0\": rpc error: code = NotFound desc = could not find container \"96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0\": container with ID starting with 96611c904983774e60fd20f2f37b3cebe1610ba4d699ae01afd466a290174ff0 not found: ID does not exist" Nov 24 11:54:02 crc kubenswrapper[4789]: I1124 11:54:02.182241 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" path="/var/lib/kubelet/pods/17e264ef-9668-45ca-81fd-13a8fc192716/volumes" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.161934 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.162676 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.399868 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ttzds"] Nov 24 11:54:20 crc kubenswrapper[4789]: E1124 11:54:20.400230 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" containerName="registry-server" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.400249 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" containerName="registry-server" Nov 24 11:54:20 crc kubenswrapper[4789]: E1124 11:54:20.400290 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" containerName="extract-utilities" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.400299 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" containerName="extract-utilities" Nov 24 11:54:20 crc kubenswrapper[4789]: E1124 11:54:20.400325 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" containerName="extract-content" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.400333 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" containerName="extract-content" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.400557 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e264ef-9668-45ca-81fd-13a8fc192716" containerName="registry-server" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 
11:54:20.401807 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.435859 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ttzds"] Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.590073 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-utilities\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.590161 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsjvj\" (UniqueName: \"kubernetes.io/projected/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-kube-api-access-bsjvj\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.590252 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-catalog-content\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.692128 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-utilities\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.692215 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsjvj\" (UniqueName: \"kubernetes.io/projected/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-kube-api-access-bsjvj\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.692290 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-catalog-content\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.692774 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-catalog-content\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.693068 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-utilities\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.717173 4789 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsjvj\" (UniqueName: \"kubernetes.io/projected/a68178ee-eb32-4c58-b08c-ad7b2d2aefce-kube-api-access-bsjvj\") pod \"redhat-operators-ttzds\" (UID: \"a68178ee-eb32-4c58-b08c-ad7b2d2aefce\") " pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:20 crc kubenswrapper[4789]: I1124 11:54:20.720661 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:21 crc kubenswrapper[4789]: I1124 11:54:21.264928 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ttzds"] Nov 24 11:54:21 crc kubenswrapper[4789]: I1124 11:54:21.292983 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttzds" event={"ID":"a68178ee-eb32-4c58-b08c-ad7b2d2aefce","Type":"ContainerStarted","Data":"f1e28e11a39fecc3ca33ab4f2610baf371cf4a8cab8efdfd70f80899e83cab0b"} Nov 24 11:54:22 crc kubenswrapper[4789]: I1124 11:54:22.302408 4789 generic.go:334] "Generic (PLEG): container finished" podID="a68178ee-eb32-4c58-b08c-ad7b2d2aefce" containerID="10b25f6b884367747ceb9363cb56dd5d095734f415a66d89479bc2bbf237ab8b" exitCode=0 Nov 24 11:54:22 crc kubenswrapper[4789]: I1124 11:54:22.302509 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttzds" event={"ID":"a68178ee-eb32-4c58-b08c-ad7b2d2aefce","Type":"ContainerDied","Data":"10b25f6b884367747ceb9363cb56dd5d095734f415a66d89479bc2bbf237ab8b"} Nov 24 11:54:33 crc kubenswrapper[4789]: I1124 11:54:33.432829 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttzds" event={"ID":"a68178ee-eb32-4c58-b08c-ad7b2d2aefce","Type":"ContainerStarted","Data":"0acbfd70dc2087b3ddcd0f1758c747d49c1d6053fd999195f46cad281b2dc8ee"} Nov 24 11:54:34 crc kubenswrapper[4789]: I1124 11:54:34.455258 4789 generic.go:334] "Generic (PLEG): container finished" podID="a68178ee-eb32-4c58-b08c-ad7b2d2aefce" containerID="0acbfd70dc2087b3ddcd0f1758c747d49c1d6053fd999195f46cad281b2dc8ee" exitCode=0 Nov 24 11:54:34 crc kubenswrapper[4789]: I1124 11:54:34.455539 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttzds" event={"ID":"a68178ee-eb32-4c58-b08c-ad7b2d2aefce","Type":"ContainerDied","Data":"0acbfd70dc2087b3ddcd0f1758c747d49c1d6053fd999195f46cad281b2dc8ee"} Nov 24 11:54:35 crc kubenswrapper[4789]: I1124 11:54:35.467075 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ttzds" event={"ID":"a68178ee-eb32-4c58-b08c-ad7b2d2aefce","Type":"ContainerStarted","Data":"9f37a063c3c81e8a10a6ae38c5cafc1935b3b32bb49afde6231432427389a6ff"} Nov 24 11:54:40 crc kubenswrapper[4789]: I1124 11:54:40.513840 4789 generic.go:334] "Generic (PLEG): container finished" podID="d2940969-00db-4677-aaae-5d1d0a25a10a" containerID="08192a0c80793c391c488f16b993af1e1c049a56c853100df1246ea4de1e8b34" exitCode=0 Nov 24 11:54:40 crc kubenswrapper[4789]: I1124 11:54:40.513909 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" event={"ID":"d2940969-00db-4677-aaae-5d1d0a25a10a","Type":"ContainerDied","Data":"08192a0c80793c391c488f16b993af1e1c049a56c853100df1246ea4de1e8b34"} Nov 24 11:54:40 crc kubenswrapper[4789]: I1124 11:54:40.542729 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-ttzds" podStartSLOduration=7.814903131 podStartE2EDuration="20.54271291s" podCreationTimestamp="2025-11-24 11:54:20 +0000 UTC" firstStartedPulling="2025-11-24 11:54:22.304238438 +0000 UTC m=+1444.886709807" lastFinishedPulling="2025-11-24 11:54:35.032048187 +0000 UTC m=+1457.614519586" observedRunningTime="2025-11-24 11:54:35.495751799 +0000 UTC m=+1458.078223208" watchObservedRunningTime="2025-11-24 11:54:40.54271291 +0000 UTC m=+1463.125184289" Nov 24 11:54:40 crc kubenswrapper[4789]: I1124 11:54:40.721547 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:40 crc kubenswrapper[4789]: I1124 11:54:40.721951 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ttzds" Nov 24 11:54:41 crc kubenswrapper[4789]: I1124 11:54:41.780871 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ttzds" podUID="a68178ee-eb32-4c58-b08c-ad7b2d2aefce" containerName="registry-server" probeResult="failure" output=< Nov 24 11:54:41 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Nov 24 11:54:41 crc kubenswrapper[4789]: > Nov 24 11:54:41 crc kubenswrapper[4789]: I1124 11:54:41.938224 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.043228 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-ssh-key\") pod \"d2940969-00db-4677-aaae-5d1d0a25a10a\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.043294 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-inventory\") pod \"d2940969-00db-4677-aaae-5d1d0a25a10a\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.043345 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c296l\" (UniqueName: \"kubernetes.io/projected/d2940969-00db-4677-aaae-5d1d0a25a10a-kube-api-access-c296l\") pod \"d2940969-00db-4677-aaae-5d1d0a25a10a\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.043431 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-bootstrap-combined-ca-bundle\") pod \"d2940969-00db-4677-aaae-5d1d0a25a10a\" (UID: \"d2940969-00db-4677-aaae-5d1d0a25a10a\") " Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.049169 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2940969-00db-4677-aaae-5d1d0a25a10a-kube-api-access-c296l" (OuterVolumeSpecName: "kube-api-access-c296l") pod "d2940969-00db-4677-aaae-5d1d0a25a10a" (UID: "d2940969-00db-4677-aaae-5d1d0a25a10a"). InnerVolumeSpecName "kube-api-access-c296l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.051632 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d2940969-00db-4677-aaae-5d1d0a25a10a" (UID: "d2940969-00db-4677-aaae-5d1d0a25a10a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.069342 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-inventory" (OuterVolumeSpecName: "inventory") pod "d2940969-00db-4677-aaae-5d1d0a25a10a" (UID: "d2940969-00db-4677-aaae-5d1d0a25a10a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.077481 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d2940969-00db-4677-aaae-5d1d0a25a10a" (UID: "d2940969-00db-4677-aaae-5d1d0a25a10a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.145250 4789 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.145618 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.145718 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2940969-00db-4677-aaae-5d1d0a25a10a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.145880 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c296l\" (UniqueName: \"kubernetes.io/projected/d2940969-00db-4677-aaae-5d1d0a25a10a-kube-api-access-c296l\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.534432 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" event={"ID":"d2940969-00db-4677-aaae-5d1d0a25a10a","Type":"ContainerDied","Data":"0ea614d63fce9aeaf5bc4f1f0f42a9a150ecba0df27ab8212d327d22c2f9373c"} Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.534491 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.534495 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ea614d63fce9aeaf5bc4f1f0f42a9a150ecba0df27ab8212d327d22c2f9373c" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.613629 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn"] Nov 24 11:54:42 crc kubenswrapper[4789]: E1124 11:54:42.614103 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2940969-00db-4677-aaae-5d1d0a25a10a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.614129 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2940969-00db-4677-aaae-5d1d0a25a10a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.614358 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2940969-00db-4677-aaae-5d1d0a25a10a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.615146 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.616987 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.618299 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.618599 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.618788 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.627392 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn"] Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.757106 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vmk\" (UniqueName: \"kubernetes.io/projected/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-kube-api-access-j9vmk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.757441 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.757638 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.859207 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.859282 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.859405 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9vmk\" (UniqueName: \"kubernetes.io/projected/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-kube-api-access-j9vmk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.864713 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.865810 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.886552 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9vmk\" (UniqueName: \"kubernetes.io/projected/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-kube-api-access-j9vmk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:42 crc kubenswrapper[4789]: I1124 11:54:42.935342 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:54:43 crc kubenswrapper[4789]: I1124 11:54:43.500905 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn"] Nov 24 11:54:43 crc kubenswrapper[4789]: I1124 11:54:43.544151 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" event={"ID":"34b9fe12-ae2c-4754-bf4a-4ab29c45f336","Type":"ContainerStarted","Data":"1da15f7fc44cb15810e3ec8625faef01404c837d42cdd18633e5d45830ce62dd"} Nov 24 11:54:44 crc kubenswrapper[4789]: I1124 11:54:44.567033 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" event={"ID":"34b9fe12-ae2c-4754-bf4a-4ab29c45f336","Type":"ContainerStarted","Data":"b6bdf3a654a2760eb69cc8bb8ba2844461b08118f85ed32c5c02f242ffee7dd2"} Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.162829 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.163501 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.163564 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.164447 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.164588 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" gracePeriod=600 Nov 24 11:54:50 crc kubenswrapper[4789]: E1124 11:54:50.305888 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.623274 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" exitCode=0 Nov 24 11:54:50 crc 
Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.623319 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"}
Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.623354 4789 scope.go:117] "RemoveContainer" containerID="a7f4024a35602eb88a760e42e4dc78156ab6feb43e0ae706700d1e332b76e45c"
Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.623926 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:54:50 crc kubenswrapper[4789]: E1124 11:54:50.624263 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.648886 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" podStartSLOduration=8.245024491 podStartE2EDuration="8.648868353s" podCreationTimestamp="2025-11-24 11:54:42 +0000 UTC" firstStartedPulling="2025-11-24 11:54:43.490333069 +0000 UTC m=+1466.072804448" lastFinishedPulling="2025-11-24 11:54:43.894176941 +0000 UTC m=+1466.476648310" observedRunningTime="2025-11-24 11:54:44.587860321 +0000 UTC m=+1467.170331740" watchObservedRunningTime="2025-11-24 11:54:50.648868353 +0000 UTC m=+1473.231339732"
Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.770329 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ttzds"
Nov 24 11:54:50 crc kubenswrapper[4789]: I1124 11:54:50.822847 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ttzds"
Nov 24 11:54:51 crc kubenswrapper[4789]: I1124 11:54:51.419789 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ttzds"]
Nov 24 11:54:51 crc kubenswrapper[4789]: I1124 11:54:51.603188 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dns7w"]
Nov 24 11:54:51 crc kubenswrapper[4789]: I1124 11:54:51.603476 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dns7w" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" containerName="registry-server" containerID="cri-o://a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8" gracePeriod=2
Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.133552 4789 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.252856 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-utilities\") pod \"5cab2fff-81a2-48d7-b216-28abaf890739\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.252928 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9n7c\" (UniqueName: \"kubernetes.io/projected/5cab2fff-81a2-48d7-b216-28abaf890739-kube-api-access-d9n7c\") pod \"5cab2fff-81a2-48d7-b216-28abaf890739\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.253895 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-catalog-content\") pod \"5cab2fff-81a2-48d7-b216-28abaf890739\" (UID: \"5cab2fff-81a2-48d7-b216-28abaf890739\") " Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.254330 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-utilities" (OuterVolumeSpecName: "utilities") pod "5cab2fff-81a2-48d7-b216-28abaf890739" (UID: "5cab2fff-81a2-48d7-b216-28abaf890739"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.258118 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cab2fff-81a2-48d7-b216-28abaf890739-kube-api-access-d9n7c" (OuterVolumeSpecName: "kube-api-access-d9n7c") pod "5cab2fff-81a2-48d7-b216-28abaf890739" (UID: "5cab2fff-81a2-48d7-b216-28abaf890739"). InnerVolumeSpecName "kube-api-access-d9n7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.350799 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cab2fff-81a2-48d7-b216-28abaf890739" (UID: "5cab2fff-81a2-48d7-b216-28abaf890739"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.355741 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.355775 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cab2fff-81a2-48d7-b216-28abaf890739-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.355785 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9n7c\" (UniqueName: \"kubernetes.io/projected/5cab2fff-81a2-48d7-b216-28abaf890739-kube-api-access-d9n7c\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.654441 4789 generic.go:334] "Generic (PLEG): container finished" podID="5cab2fff-81a2-48d7-b216-28abaf890739" containerID="a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8" exitCode=0 Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.654551 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dns7w" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.654512 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dns7w" event={"ID":"5cab2fff-81a2-48d7-b216-28abaf890739","Type":"ContainerDied","Data":"a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8"} Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.654690 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dns7w" event={"ID":"5cab2fff-81a2-48d7-b216-28abaf890739","Type":"ContainerDied","Data":"332e6b9e3502a50c66b0469c19efdea8d810027d963e3c11ba0ce4b32347d638"} Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.654712 4789 scope.go:117] "RemoveContainer" containerID="a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.678363 4789 scope.go:117] "RemoveContainer" containerID="f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.696591 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dns7w"] Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.707780 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dns7w"] Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.740834 4789 scope.go:117] "RemoveContainer" containerID="e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.765680 4789 scope.go:117] "RemoveContainer" containerID="a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8" Nov 24 11:54:52 crc kubenswrapper[4789]: E1124 11:54:52.767891 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8\": container with ID starting with a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8 not found: ID does not exist" containerID="a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8" Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.767925 4789 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8"} err="failed to get container status \"a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8\": rpc error: code = NotFound desc = could not find container \"a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8\": container with ID starting with a9f7012171355b0ffa4c48ce30aa07d6b7021995a497fb9818e746846999a1f8 not found: ID does not exist"
Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.767949 4789 scope.go:117] "RemoveContainer" containerID="f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f"
Nov 24 11:54:52 crc kubenswrapper[4789]: E1124 11:54:52.768624 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f\": container with ID starting with f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f not found: ID does not exist" containerID="f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f"
Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.768711 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f"} err="failed to get container status \"f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f\": rpc error: code = NotFound desc = could not find container \"f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f\": container with ID starting with f176edfa96eba44dd8fe608e6b20ee2a624bbc290f27b7de6a97da9a8abecf7f not found: ID does not exist"
Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.768748 4789 scope.go:117] "RemoveContainer" containerID="e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d"
Nov 24 11:54:52 crc kubenswrapper[4789]: E1124 11:54:52.769131 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d\": container with ID starting with e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d not found: ID does not exist" containerID="e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d"
Nov 24 11:54:52 crc kubenswrapper[4789]: I1124 11:54:52.769157 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d"} err="failed to get container status \"e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d\": rpc error: code = NotFound desc = could not find container \"e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d\": container with ID starting with e9198e8513ccb91b86a364aa78002c2923006b7f782d1d1bc46e0a5290424f5d not found: ID does not exist"
Nov 24 11:54:54 crc kubenswrapper[4789]: I1124 11:54:54.181395 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" path="/var/lib/kubelet/pods/5cab2fff-81a2-48d7-b216-28abaf890739/volumes"
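The NotFound errors above are the benign tail of the dns7w cleanup: the pod's containers were already removed along with the sandbox, so each follow-up RemoveContainer finds nothing, and pod_container_deletor surfaces the NotFound status without failing the cleanup. The pattern is the usual idempotent delete, sketched below with illustrative names:

import "errors"

// errNotFound stands in for the runtime's gRPC NotFound code.
var errNotFound = errors.New("not found")

// ensureRemoved treats "already gone" as success, which is why the NotFound
// responses above leave the cleanup on its happy path.
func ensureRemoved(id string, remove func(string) error) error {
	if err := remove(id); err != nil && !errors.Is(err, errNotFound) {
		return err // a real failure, not mere absence
	}
	return nil // removed now, or was already absent
}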
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:55:15 crc kubenswrapper[4789]: I1124 11:55:15.168925 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:55:15 crc kubenswrapper[4789]: E1124 11:55:15.169851 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:55:27 crc kubenswrapper[4789]: I1124 11:55:27.169122 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:55:27 crc kubenswrapper[4789]: E1124 11:55:27.169816 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:55:39 crc kubenswrapper[4789]: I1124 11:55:39.170054 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:55:39 crc kubenswrapper[4789]: E1124 11:55:39.171040 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:55:45 crc kubenswrapper[4789]: I1124 11:55:45.066889 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zqv9q"] Nov 24 11:55:45 crc kubenswrapper[4789]: I1124 11:55:45.085248 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-jsdzm"] Nov 24 11:55:45 crc kubenswrapper[4789]: I1124 11:55:45.093590 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e52f-account-create-5n95s"] Nov 24 11:55:45 crc kubenswrapper[4789]: I1124 11:55:45.101868 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cd25-account-create-56jpk"] Nov 24 11:55:45 crc kubenswrapper[4789]: I1124 11:55:45.108004 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cd25-account-create-56jpk"] Nov 24 11:55:45 crc kubenswrapper[4789]: I1124 11:55:45.114585 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-jsdzm"] Nov 24 11:55:45 crc kubenswrapper[4789]: I1124 11:55:45.120885 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e52f-account-create-5n95s"] 
Nov 24 11:55:45 crc kubenswrapper[4789]: I1124 11:55:45.127549 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zqv9q"] Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.050275 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-wdn9d"] Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.061951 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ccb9-account-create-n9jzt"] Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.070222 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-wdn9d"] Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.077554 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ccb9-account-create-n9jzt"] Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.180601 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194" path="/var/lib/kubelet/pods/50d81cc5-1abb-4c0a-9b4c-e9d69b0e0194/volumes" Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.181240 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91441784-0780-4721-bed1-4197f7f24cdb" path="/var/lib/kubelet/pods/91441784-0780-4721-bed1-4197f7f24cdb/volumes" Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.181785 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a18094e0-852b-4365-b8c8-a65185dc446e" path="/var/lib/kubelet/pods/a18094e0-852b-4365-b8c8-a65185dc446e/volumes" Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.182393 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a46f49-9d70-4876-a8ba-070a44606a93" path="/var/lib/kubelet/pods/b6a46f49-9d70-4876-a8ba-070a44606a93/volumes" Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.183545 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e0bf0e-258d-41c3-af5b-86b1413d0d9b" path="/var/lib/kubelet/pods/b8e0bf0e-258d-41c3-af5b-86b1413d0d9b/volumes" Nov 24 11:55:46 crc kubenswrapper[4789]: I1124 11:55:46.184256 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8a3a60-2e4e-461d-be45-3b2d8db511ba" path="/var/lib/kubelet/pods/fd8a3a60-2e4e-461d-be45-3b2d8db511ba/volumes" Nov 24 11:55:53 crc kubenswrapper[4789]: I1124 11:55:53.170051 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:55:53 crc kubenswrapper[4789]: E1124 11:55:53.171235 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:56:01 crc kubenswrapper[4789]: I1124 11:56:01.325335 4789 generic.go:334] "Generic (PLEG): container finished" podID="34b9fe12-ae2c-4754-bf4a-4ab29c45f336" containerID="b6bdf3a654a2760eb69cc8bb8ba2844461b08118f85ed32c5c02f242ffee7dd2" exitCode=0 Nov 24 11:56:01 crc kubenswrapper[4789]: I1124 11:56:01.325495 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" 
event={"ID":"34b9fe12-ae2c-4754-bf4a-4ab29c45f336","Type":"ContainerDied","Data":"b6bdf3a654a2760eb69cc8bb8ba2844461b08118f85ed32c5c02f242ffee7dd2"} Nov 24 11:56:02 crc kubenswrapper[4789]: I1124 11:56:02.728807 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:56:02 crc kubenswrapper[4789]: I1124 11:56:02.898392 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9vmk\" (UniqueName: \"kubernetes.io/projected/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-kube-api-access-j9vmk\") pod \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " Nov 24 11:56:02 crc kubenswrapper[4789]: I1124 11:56:02.898527 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-ssh-key\") pod \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " Nov 24 11:56:02 crc kubenswrapper[4789]: I1124 11:56:02.898827 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-inventory\") pod \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\" (UID: \"34b9fe12-ae2c-4754-bf4a-4ab29c45f336\") " Nov 24 11:56:02 crc kubenswrapper[4789]: I1124 11:56:02.904013 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-kube-api-access-j9vmk" (OuterVolumeSpecName: "kube-api-access-j9vmk") pod "34b9fe12-ae2c-4754-bf4a-4ab29c45f336" (UID: "34b9fe12-ae2c-4754-bf4a-4ab29c45f336"). InnerVolumeSpecName "kube-api-access-j9vmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:02 crc kubenswrapper[4789]: I1124 11:56:02.943083 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "34b9fe12-ae2c-4754-bf4a-4ab29c45f336" (UID: "34b9fe12-ae2c-4754-bf4a-4ab29c45f336"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:02 crc kubenswrapper[4789]: I1124 11:56:02.944875 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-inventory" (OuterVolumeSpecName: "inventory") pod "34b9fe12-ae2c-4754-bf4a-4ab29c45f336" (UID: "34b9fe12-ae2c-4754-bf4a-4ab29c45f336"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.001384 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9vmk\" (UniqueName: \"kubernetes.io/projected/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-kube-api-access-j9vmk\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.001431 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.001444 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34b9fe12-ae2c-4754-bf4a-4ab29c45f336-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.343868 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" event={"ID":"34b9fe12-ae2c-4754-bf4a-4ab29c45f336","Type":"ContainerDied","Data":"1da15f7fc44cb15810e3ec8625faef01404c837d42cdd18633e5d45830ce62dd"} Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.343925 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1da15f7fc44cb15810e3ec8625faef01404c837d42cdd18633e5d45830ce62dd" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.343938 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.434334 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb"] Nov 24 11:56:03 crc kubenswrapper[4789]: E1124 11:56:03.434693 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" containerName="extract-content" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.434709 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" containerName="extract-content" Nov 24 11:56:03 crc kubenswrapper[4789]: E1124 11:56:03.434724 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b9fe12-ae2c-4754-bf4a-4ab29c45f336" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.434731 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b9fe12-ae2c-4754-bf4a-4ab29c45f336" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:56:03 crc kubenswrapper[4789]: E1124 11:56:03.434740 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" containerName="extract-utilities" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.434746 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" containerName="extract-utilities" Nov 24 11:56:03 crc kubenswrapper[4789]: E1124 11:56:03.434776 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" containerName="registry-server" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.434782 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" containerName="registry-server" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.434950 4789 
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.434950 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cab2fff-81a2-48d7-b216-28abaf890739" containerName="registry-server"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.434967 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b9fe12-ae2c-4754-bf4a-4ab29c45f336" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.435611 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.439153 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.439214 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.439363 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.439378 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.463813 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb"]
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.512369 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.512442 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.512567 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vjw\" (UniqueName: \"kubernetes.io/projected/6ac9a80b-ec9c-43cc-b16d-d8113619caec-kube-api-access-f8vjw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.613723 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb"
Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.613783 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName:
\"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.613868 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8vjw\" (UniqueName: \"kubernetes.io/projected/6ac9a80b-ec9c-43cc-b16d-d8113619caec-kube-api-access-f8vjw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.618815 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.622259 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.635217 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vjw\" (UniqueName: \"kubernetes.io/projected/6ac9a80b-ec9c-43cc-b16d-d8113619caec-kube-api-access-f8vjw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" Nov 24 11:56:03 crc kubenswrapper[4789]: I1124 11:56:03.762325 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" Nov 24 11:56:04 crc kubenswrapper[4789]: I1124 11:56:04.253348 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb"] Nov 24 11:56:04 crc kubenswrapper[4789]: I1124 11:56:04.273170 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:56:04 crc kubenswrapper[4789]: I1124 11:56:04.352331 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" event={"ID":"6ac9a80b-ec9c-43cc-b16d-d8113619caec","Type":"ContainerStarted","Data":"2fa0b45fdd4018971b292c296d6ec66469ff748bdfb851da9739a3a1c1ae6cbc"} Nov 24 11:56:05 crc kubenswrapper[4789]: I1124 11:56:05.363089 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" event={"ID":"6ac9a80b-ec9c-43cc-b16d-d8113619caec","Type":"ContainerStarted","Data":"1f08834065508be7035764ed5c8ad936eb288b2e1fac7ba73a49bda8e1b8d870"} Nov 24 11:56:05 crc kubenswrapper[4789]: I1124 11:56:05.403176 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" podStartSLOduration=1.977046426 podStartE2EDuration="2.403155056s" podCreationTimestamp="2025-11-24 11:56:03 +0000 UTC" firstStartedPulling="2025-11-24 11:56:04.272905079 +0000 UTC m=+1546.855376458" lastFinishedPulling="2025-11-24 11:56:04.699013709 +0000 UTC m=+1547.281485088" observedRunningTime="2025-11-24 11:56:05.396119659 +0000 UTC m=+1547.978591058" watchObservedRunningTime="2025-11-24 11:56:05.403155056 +0000 UTC m=+1547.985626435" Nov 24 11:56:06 crc kubenswrapper[4789]: I1124 11:56:06.168924 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:56:06 crc kubenswrapper[4789]: E1124 11:56:06.169309 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:56:10 crc kubenswrapper[4789]: I1124 11:56:10.406281 4789 generic.go:334] "Generic (PLEG): container finished" podID="6ac9a80b-ec9c-43cc-b16d-d8113619caec" containerID="1f08834065508be7035764ed5c8ad936eb288b2e1fac7ba73a49bda8e1b8d870" exitCode=0 Nov 24 11:56:10 crc kubenswrapper[4789]: I1124 11:56:10.406363 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" event={"ID":"6ac9a80b-ec9c-43cc-b16d-d8113619caec","Type":"ContainerDied","Data":"1f08834065508be7035764ed5c8ad936eb288b2e1fac7ba73a49bda8e1b8d870"} Nov 24 11:56:11 crc kubenswrapper[4789]: I1124 11:56:11.029114 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ncww2"] Nov 24 11:56:11 crc kubenswrapper[4789]: I1124 11:56:11.037210 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ncww2"] Nov 24 11:56:11 crc kubenswrapper[4789]: I1124 11:56:11.792749 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" Nov 24 11:56:11 crc kubenswrapper[4789]: I1124 11:56:11.968729 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8vjw\" (UniqueName: \"kubernetes.io/projected/6ac9a80b-ec9c-43cc-b16d-d8113619caec-kube-api-access-f8vjw\") pod \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " Nov 24 11:56:11 crc kubenswrapper[4789]: I1124 11:56:11.968809 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-ssh-key\") pod \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " Nov 24 11:56:11 crc kubenswrapper[4789]: I1124 11:56:11.968933 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-inventory\") pod \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\" (UID: \"6ac9a80b-ec9c-43cc-b16d-d8113619caec\") " Nov 24 11:56:11 crc kubenswrapper[4789]: I1124 11:56:11.979314 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac9a80b-ec9c-43cc-b16d-d8113619caec-kube-api-access-f8vjw" (OuterVolumeSpecName: "kube-api-access-f8vjw") pod "6ac9a80b-ec9c-43cc-b16d-d8113619caec" (UID: "6ac9a80b-ec9c-43cc-b16d-d8113619caec"). InnerVolumeSpecName "kube-api-access-f8vjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:56:11 crc kubenswrapper[4789]: I1124 11:56:11.996943 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6ac9a80b-ec9c-43cc-b16d-d8113619caec" (UID: "6ac9a80b-ec9c-43cc-b16d-d8113619caec"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.000822 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-inventory" (OuterVolumeSpecName: "inventory") pod "6ac9a80b-ec9c-43cc-b16d-d8113619caec" (UID: "6ac9a80b-ec9c-43cc-b16d-d8113619caec"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.071001 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.071047 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8vjw\" (UniqueName: \"kubernetes.io/projected/6ac9a80b-ec9c-43cc-b16d-d8113619caec-kube-api-access-f8vjw\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.071062 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ac9a80b-ec9c-43cc-b16d-d8113619caec-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.186879 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62d7feaf-71e2-4d0e-b0b9-2f61eb421522" path="/var/lib/kubelet/pods/62d7feaf-71e2-4d0e-b0b9-2f61eb421522/volumes" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.424789 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" event={"ID":"6ac9a80b-ec9c-43cc-b16d-d8113619caec","Type":"ContainerDied","Data":"2fa0b45fdd4018971b292c296d6ec66469ff748bdfb851da9739a3a1c1ae6cbc"} Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.425078 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa0b45fdd4018971b292c296d6ec66469ff748bdfb851da9739a3a1c1ae6cbc" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.424903 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.496555 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"] Nov 24 11:56:12 crc kubenswrapper[4789]: E1124 11:56:12.497032 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac9a80b-ec9c-43cc-b16d-d8113619caec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.497058 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac9a80b-ec9c-43cc-b16d-d8113619caec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.497278 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac9a80b-ec9c-43cc-b16d-d8113619caec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.498099 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.501836 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.505370 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.505839 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.506099 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.506767 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"]
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.680344 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kqd\" (UniqueName: \"kubernetes.io/projected/f4833b4b-25fe-4457-bb87-72efdfe17034-kube-api-access-g5kqd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.680858 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.681147 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.782211 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.782417 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.782555 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5kqd\" (UniqueName: \"kubernetes.io/projected/f4833b4b-25fe-4457-bb87-72efdfe17034-kube-api-access-g5kqd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.787193 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.787427 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.799749 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5kqd\" (UniqueName: \"kubernetes.io/projected/f4833b4b-25fe-4457-bb87-72efdfe17034-kube-api-access-g5kqd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-djqbm\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:12 crc kubenswrapper[4789]: I1124 11:56:12.816107 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:13 crc kubenswrapper[4789]: I1124 11:56:13.300558 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"]
Nov 24 11:56:13 crc kubenswrapper[4789]: I1124 11:56:13.435133 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm" event={"ID":"f4833b4b-25fe-4457-bb87-72efdfe17034","Type":"ContainerStarted","Data":"84e69928db655cac54be8a4faee35d812bcf6971f34cf9e0879e06a21fca19ad"}
Nov 24 11:56:14 crc kubenswrapper[4789]: I1124 11:56:14.445979 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm" event={"ID":"f4833b4b-25fe-4457-bb87-72efdfe17034","Type":"ContainerStarted","Data":"d108960f6b0999e238c9eba51a54bd8e87178c9d158ca793c9cb158cbc5a0238"}
Nov 24 11:56:14 crc kubenswrapper[4789]: I1124 11:56:14.467909 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm" podStartSLOduration=2.062950765 podStartE2EDuration="2.467864526s" podCreationTimestamp="2025-11-24 11:56:12 +0000 UTC" firstStartedPulling="2025-11-24 11:56:13.312035907 +0000 UTC m=+1555.894507286" lastFinishedPulling="2025-11-24 11:56:13.716949668 +0000 UTC m=+1556.299421047" observedRunningTime="2025-11-24 11:56:14.461747858 +0000 UTC m=+1557.044219257" watchObservedRunningTime="2025-11-24 11:56:14.467864526 +0000 UTC m=+1557.050335915"
Nov 24 11:56:19 crc kubenswrapper[4789]: I1124 11:56:19.169804 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:56:19 crc kubenswrapper[4789]: E1124 11:56:19.170508 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:56:23 crc kubenswrapper[4789]: I1124 11:56:23.046534 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mzj6x"]
Nov 24 11:56:23 crc kubenswrapper[4789]: I1124 11:56:23.055671 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dc2mk"]
Nov 24 11:56:23 crc kubenswrapper[4789]: I1124 11:56:23.064887 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dc2mk"]
Nov 24 11:56:23 crc kubenswrapper[4789]: I1124 11:56:23.071920 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mzj6x"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.039553 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4176-account-create-6mldw"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.046519 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vplq8"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.056181 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9ffd-account-create-k89vc"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.064707 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3375-account-create-nntd8"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.075715 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9ffd-account-create-k89vc"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.088814 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vplq8"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.099329 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3375-account-create-nntd8"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.107618 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4176-account-create-6mldw"]
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.197505 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a4d7a1-39fa-4ab6-add9-7258bb865809" path="/var/lib/kubelet/pods/20a4d7a1-39fa-4ab6-add9-7258bb865809/volumes"
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.198651 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8dfa37-0258-4fa8-814f-52c167e55e9c" path="/var/lib/kubelet/pods/5d8dfa37-0258-4fa8-814f-52c167e55e9c/volumes"
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.199525 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bba6a0a-259f-4a74-850e-2025f99757e6" path="/var/lib/kubelet/pods/6bba6a0a-259f-4a74-850e-2025f99757e6/volumes"
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.200401 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85dfb49e-554c-415f-9add-67bb02165386" path="/var/lib/kubelet/pods/85dfb49e-554c-415f-9add-67bb02165386/volumes"
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.202068 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e382fe-d85a-44e5-8047-e3ddad1a85f4" path="/var/lib/kubelet/pods/91e382fe-d85a-44e5-8047-e3ddad1a85f4/volumes"
Nov 24 11:56:24 crc kubenswrapper[4789]: I1124 11:56:24.202833 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a523a3ed-09c8-4752-8b89-562cbb1c80c1" path="/var/lib/kubelet/pods/a523a3ed-09c8-4752-8b89-562cbb1c80c1/volumes"
Nov 24 11:56:30 crc kubenswrapper[4789]: I1124 11:56:30.169778 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:56:30 crc kubenswrapper[4789]: E1124 11:56:30.170561 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:56:31 crc kubenswrapper[4789]: I1124 11:56:31.051892 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rmhhs"]
Nov 24 11:56:31 crc kubenswrapper[4789]: I1124 11:56:31.068778 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rmhhs"]
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.178744 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46218063-8c0c-4d2a-9693-1ee25e647520" path="/var/lib/kubelet/pods/46218063-8c0c-4d2a-9693-1ee25e647520/volumes"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.193995 4789 scope.go:117] "RemoveContainer" containerID="90756f3378ab8fb4aebe76b64a9808107d10151514456dfe42cd80f1ee0e539d"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.215044 4789 scope.go:117] "RemoveContainer" containerID="bef8fa7f119f7d23791353ff5bcfb5af673f41f97b0bd4ba04f5812c33b04d80"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.262894 4789 scope.go:117] "RemoveContainer" containerID="f2de067f8bccc7410e4b54afd320cb8c9e683c63b554abe973b4f0fc6423cf5b"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.305842 4789 scope.go:117] "RemoveContainer" containerID="7d3e6d13861a4724fa638f14c558d0f5c9a2c2dbb59ba3482c94312b453716c3"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.364037 4789 scope.go:117] "RemoveContainer" containerID="697b5f7294de6d915708d15465c6ac3301ba6fcc77c785d7366e7147d9b854d9"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.412637 4789 scope.go:117] "RemoveContainer" containerID="8b75825b4b3a9a89bee133c3bff20e812c1ca7aad481a968c500d8ae4551fd0c"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.439686 4789 scope.go:117] "RemoveContainer" containerID="6bf3515c7a28c2f7203a6efacf2f7955f88a0ce5274571d2f3224860370688a6"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.460060 4789 scope.go:117] "RemoveContainer" containerID="f14469d605d098a8407f8971827a51d2f70403e054d6e52ca2ac391f6d0e6abf"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.494908 4789 scope.go:117] "RemoveContainer" containerID="ba526d57ffe37ec8885d602f06d5139de1799881162498c5eda463bd5c268cf3"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.522113 4789 scope.go:117] "RemoveContainer" containerID="34cd99cb10cabf025b3f8220a3061cae389ee2f725ae0969fb2696ef640c86e4"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.548763 4789 scope.go:117] "RemoveContainer" containerID="98ec19b78e1773cd12a2bce81079e889c24c3b45919fad2faebb2c5d7093a893"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.588715 4789 scope.go:117] "RemoveContainer" containerID="894d5d19f0675d2aa6b24eb3551b3228fc1b6e0ec8f2c0a157e6a030fdd128d8"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.624520 4789 scope.go:117] "RemoveContainer" containerID="9133667257b57c7d071afe53d34c96c371437c3bd80b52d8e60bc9be1d6da32d"
Nov 24 11:56:32 crc kubenswrapper[4789]: I1124 11:56:32.649258 4789 scope.go:117] "RemoveContainer" containerID="ab84db5a3cd50cfd4792c1a4cb6de8f1370640d139190120835d22b0a44e71ff"
Nov 24 11:56:42 crc kubenswrapper[4789]: I1124 11:56:42.169734 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:56:42 crc kubenswrapper[4789]: E1124 11:56:42.170389 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:56:54 crc kubenswrapper[4789]: I1124 11:56:54.046176 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-7s7v7"]
Nov 24 11:56:54 crc kubenswrapper[4789]: I1124 11:56:54.053202 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-7s7v7"]
Nov 24 11:56:54 crc kubenswrapper[4789]: I1124 11:56:54.169206 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:56:54 crc kubenswrapper[4789]: E1124 11:56:54.169585 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:56:54 crc kubenswrapper[4789]: I1124 11:56:54.182114 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce66a07-c046-4c6c-b5a5-443818f1b5db" path="/var/lib/kubelet/pods/7ce66a07-c046-4c6c-b5a5-443818f1b5db/volumes"
Nov 24 11:56:56 crc kubenswrapper[4789]: I1124 11:56:56.847648 4789 generic.go:334] "Generic (PLEG): container finished" podID="f4833b4b-25fe-4457-bb87-72efdfe17034" containerID="d108960f6b0999e238c9eba51a54bd8e87178c9d158ca793c9cb158cbc5a0238" exitCode=0
Nov 24 11:56:56 crc kubenswrapper[4789]: I1124 11:56:56.847755 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm" event={"ID":"f4833b4b-25fe-4457-bb87-72efdfe17034","Type":"ContainerDied","Data":"d108960f6b0999e238c9eba51a54bd8e87178c9d158ca793c9cb158cbc5a0238"}
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.249561 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.374826 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-ssh-key\") pod \"f4833b4b-25fe-4457-bb87-72efdfe17034\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") "
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.374889 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-inventory\") pod \"f4833b4b-25fe-4457-bb87-72efdfe17034\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") "
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.374929 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5kqd\" (UniqueName: \"kubernetes.io/projected/f4833b4b-25fe-4457-bb87-72efdfe17034-kube-api-access-g5kqd\") pod \"f4833b4b-25fe-4457-bb87-72efdfe17034\" (UID: \"f4833b4b-25fe-4457-bb87-72efdfe17034\") "
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.385043 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4833b4b-25fe-4457-bb87-72efdfe17034-kube-api-access-g5kqd" (OuterVolumeSpecName: "kube-api-access-g5kqd") pod "f4833b4b-25fe-4457-bb87-72efdfe17034" (UID: "f4833b4b-25fe-4457-bb87-72efdfe17034"). InnerVolumeSpecName "kube-api-access-g5kqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.405833 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f4833b4b-25fe-4457-bb87-72efdfe17034" (UID: "f4833b4b-25fe-4457-bb87-72efdfe17034"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.406775 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-inventory" (OuterVolumeSpecName: "inventory") pod "f4833b4b-25fe-4457-bb87-72efdfe17034" (UID: "f4833b4b-25fe-4457-bb87-72efdfe17034"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.476968 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.477004 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4833b4b-25fe-4457-bb87-72efdfe17034-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.477015 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5kqd\" (UniqueName: \"kubernetes.io/projected/f4833b4b-25fe-4457-bb87-72efdfe17034-kube-api-access-g5kqd\") on node \"crc\" DevicePath \"\""
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.868190 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm" event={"ID":"f4833b4b-25fe-4457-bb87-72efdfe17034","Type":"ContainerDied","Data":"84e69928db655cac54be8a4faee35d812bcf6971f34cf9e0879e06a21fca19ad"}
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.868243 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e69928db655cac54be8a4faee35d812bcf6971f34cf9e0879e06a21fca19ad"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.868301 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-djqbm"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.968650 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"]
Nov 24 11:56:58 crc kubenswrapper[4789]: E1124 11:56:58.969055 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4833b4b-25fe-4457-bb87-72efdfe17034" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.969075 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4833b4b-25fe-4457-bb87-72efdfe17034" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.969498 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4833b4b-25fe-4457-bb87-72efdfe17034" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.970632 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.973829 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.974608 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.978977 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.979015 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 11:56:58 crc kubenswrapper[4789]: I1124 11:56:58.979976 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"]
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.086458 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.086721 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.086915 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr4qg\" (UniqueName: \"kubernetes.io/projected/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-kube-api-access-xr4qg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.188186 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.188267 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.188298 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr4qg\" (UniqueName: \"kubernetes.io/projected/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-kube-api-access-xr4qg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.196178 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.196873 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.210679 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr4qg\" (UniqueName: \"kubernetes.io/projected/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-kube-api-access-xr4qg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.310693 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:56:59 crc kubenswrapper[4789]: I1124 11:56:59.866778 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"]
Nov 24 11:56:59 crc kubenswrapper[4789]: W1124 11:56:59.871390 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad8a5468_ca7f_4a4e_a436_068f8f1256c3.slice/crio-6667cccefcb935eac3704cf72a91688ab2f39751b37451d1572bb717e78eac3e WatchSource:0}: Error finding container 6667cccefcb935eac3704cf72a91688ab2f39751b37451d1572bb717e78eac3e: Status 404 returned error can't find the container with id 6667cccefcb935eac3704cf72a91688ab2f39751b37451d1572bb717e78eac3e
Nov 24 11:57:00 crc kubenswrapper[4789]: I1124 11:57:00.900651 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4" event={"ID":"ad8a5468-ca7f-4a4e-a436-068f8f1256c3","Type":"ContainerStarted","Data":"680c136c5fde0ab4d625332603b7ee173ffa0eac761b44917bee05de91359ffe"}
Nov 24 11:57:00 crc kubenswrapper[4789]: I1124 11:57:00.900696 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4" event={"ID":"ad8a5468-ca7f-4a4e-a436-068f8f1256c3","Type":"ContainerStarted","Data":"6667cccefcb935eac3704cf72a91688ab2f39751b37451d1572bb717e78eac3e"}
Nov 24 11:57:00 crc kubenswrapper[4789]: I1124 11:57:00.932912 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4" podStartSLOduration=2.479024234 podStartE2EDuration="2.932893419s" podCreationTimestamp="2025-11-24 11:56:58 +0000 UTC" firstStartedPulling="2025-11-24 11:56:59.878264487 +0000 UTC m=+1602.460735866" lastFinishedPulling="2025-11-24 11:57:00.332133662 +0000 UTC m=+1602.914605051" observedRunningTime="2025-11-24 11:57:00.932025718 +0000 UTC m=+1603.514497097" watchObservedRunningTime="2025-11-24 11:57:00.932893419 +0000 UTC m=+1603.515364818"
Nov 24 11:57:03 crc kubenswrapper[4789]: I1124 11:57:03.034531 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fwr22"]
Nov 24 11:57:03 crc kubenswrapper[4789]: I1124 11:57:03.051251 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fwr22"]
Nov 24 11:57:04 crc kubenswrapper[4789]: I1124 11:57:04.180609 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c3fb8f-0aab-4e51-bfa0-50e905479f77" path="/var/lib/kubelet/pods/b0c3fb8f-0aab-4e51-bfa0-50e905479f77/volumes"
Nov 24 11:57:04 crc kubenswrapper[4789]: I1124 11:57:04.938979 4789 generic.go:334] "Generic (PLEG): container finished" podID="ad8a5468-ca7f-4a4e-a436-068f8f1256c3" containerID="680c136c5fde0ab4d625332603b7ee173ffa0eac761b44917bee05de91359ffe" exitCode=0
Nov 24 11:57:04 crc kubenswrapper[4789]: I1124 11:57:04.939064 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4" event={"ID":"ad8a5468-ca7f-4a4e-a436-068f8f1256c3","Type":"ContainerDied","Data":"680c136c5fde0ab4d625332603b7ee173ffa0eac761b44917bee05de91359ffe"}
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.366696 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.545507 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-ssh-key\") pod \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") "
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.545586 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr4qg\" (UniqueName: \"kubernetes.io/projected/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-kube-api-access-xr4qg\") pod \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") "
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.545696 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-inventory\") pod \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\" (UID: \"ad8a5468-ca7f-4a4e-a436-068f8f1256c3\") "
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.551693 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-kube-api-access-xr4qg" (OuterVolumeSpecName: "kube-api-access-xr4qg") pod "ad8a5468-ca7f-4a4e-a436-068f8f1256c3" (UID: "ad8a5468-ca7f-4a4e-a436-068f8f1256c3"). InnerVolumeSpecName "kube-api-access-xr4qg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.578069 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-inventory" (OuterVolumeSpecName: "inventory") pod "ad8a5468-ca7f-4a4e-a436-068f8f1256c3" (UID: "ad8a5468-ca7f-4a4e-a436-068f8f1256c3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.587694 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ad8a5468-ca7f-4a4e-a436-068f8f1256c3" (UID: "ad8a5468-ca7f-4a4e-a436-068f8f1256c3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.647307 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.647342 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.647354 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr4qg\" (UniqueName: \"kubernetes.io/projected/ad8a5468-ca7f-4a4e-a436-068f8f1256c3-kube-api-access-xr4qg\") on node \"crc\" DevicePath \"\""
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.970019 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4" event={"ID":"ad8a5468-ca7f-4a4e-a436-068f8f1256c3","Type":"ContainerDied","Data":"6667cccefcb935eac3704cf72a91688ab2f39751b37451d1572bb717e78eac3e"}
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.970388 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6667cccefcb935eac3704cf72a91688ab2f39751b37451d1572bb717e78eac3e"
Nov 24 11:57:06 crc kubenswrapper[4789]: I1124 11:57:06.970168 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.048849 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"]
Nov 24 11:57:07 crc kubenswrapper[4789]: E1124 11:57:07.049192 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8a5468-ca7f-4a4e-a436-068f8f1256c3" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.049208 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8a5468-ca7f-4a4e-a436-068f8f1256c3" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.049398 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8a5468-ca7f-4a4e-a436-068f8f1256c3" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.049979 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.056055 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.056236 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.056278 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.060775 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.067894 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"]
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.155940 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.156069 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.156143 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cqn\" (UniqueName: \"kubernetes.io/projected/0c81a61c-6108-4aa5-b0de-fb73115e28cf-kube-api-access-42cqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.170093 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:57:07 crc kubenswrapper[4789]: E1124 11:57:07.170350 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.257914 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42cqn\" (UniqueName: \"kubernetes.io/projected/0c81a61c-6108-4aa5-b0de-fb73115e28cf-kube-api-access-42cqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.258074 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.258160 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.272267 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.273625 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.273896 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42cqn\" (UniqueName: \"kubernetes.io/projected/0c81a61c-6108-4aa5-b0de-fb73115e28cf-kube-api-access-42cqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.414157 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.953972 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"]
Nov 24 11:57:07 crc kubenswrapper[4789]: I1124 11:57:07.985129 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6" event={"ID":"0c81a61c-6108-4aa5-b0de-fb73115e28cf","Type":"ContainerStarted","Data":"33dad11dd5f2e677ec583373881d0d81dd37561a7910f399223bf72b5af5e5eb"}
Nov 24 11:57:08 crc kubenswrapper[4789]: I1124 11:57:08.033342 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-mvgg8"]
Nov 24 11:57:08 crc kubenswrapper[4789]: I1124 11:57:08.044937 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gn9zx"]
Nov 24 11:57:08 crc kubenswrapper[4789]: I1124 11:57:08.055387 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-gn9zx"]
Nov 24 11:57:08 crc kubenswrapper[4789]: I1124 11:57:08.061767 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-mvgg8"]
Nov 24 11:57:08 crc kubenswrapper[4789]: I1124 11:57:08.187758 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad19529b-59a5-42f3-8adf-ba14978e1f8a" path="/var/lib/kubelet/pods/ad19529b-59a5-42f3-8adf-ba14978e1f8a/volumes"
Nov 24 11:57:08 crc kubenswrapper[4789]: I1124 11:57:08.188806 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf547f01-0021-4f93-ae9b-a7afa5016c6a" path="/var/lib/kubelet/pods/bf547f01-0021-4f93-ae9b-a7afa5016c6a/volumes"
Nov 24 11:57:08 crc kubenswrapper[4789]: I1124 11:57:08.992743 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6" event={"ID":"0c81a61c-6108-4aa5-b0de-fb73115e28cf","Type":"ContainerStarted","Data":"c28fba649b6395f5fc98ea26ce87b78bbfd8133b19cc816393931569b17d80db"}
Nov 24 11:57:09 crc kubenswrapper[4789]: I1124 11:57:09.020352 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6" podStartSLOduration=1.5498939040000002 podStartE2EDuration="2.0203322s" podCreationTimestamp="2025-11-24 11:57:07 +0000 UTC" firstStartedPulling="2025-11-24 11:57:07.966807945 +0000 UTC m=+1610.549279324" lastFinishedPulling="2025-11-24 11:57:08.437246201 +0000 UTC m=+1611.019717620" observedRunningTime="2025-11-24 11:57:09.009710624 +0000 UTC m=+1611.592182013" watchObservedRunningTime="2025-11-24 11:57:09.0203322 +0000 UTC m=+1611.602803579"
Nov 24 11:57:19 crc kubenswrapper[4789]: I1124 11:57:19.042140 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-msb22"]
Nov 24 11:57:19 crc kubenswrapper[4789]: I1124 11:57:19.048925 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-msb22"]
Nov 24 11:57:19 crc kubenswrapper[4789]: I1124 11:57:19.168852 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:57:19 crc kubenswrapper[4789]: E1124 11:57:19.169095 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:57:20 crc kubenswrapper[4789]: I1124 11:57:20.191833 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e41ad3b-8d25-49db-8c15-4a3a57f47e2f" path="/var/lib/kubelet/pods/2e41ad3b-8d25-49db-8c15-4a3a57f47e2f/volumes"
Nov 24 11:57:30 crc kubenswrapper[4789]: I1124 11:57:30.169414 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:57:30 crc kubenswrapper[4789]: E1124 11:57:30.170187 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:57:32 crc kubenswrapper[4789]: I1124 11:57:32.895946 4789 scope.go:117] "RemoveContainer" containerID="fc9dbd6cb35e285eb0ce34d2a43fff15cbd426d38903163e2a96ce8b8b9c011c"
Nov 24 11:57:32 crc kubenswrapper[4789]: I1124 11:57:32.945865 4789 scope.go:117] "RemoveContainer" containerID="3a5f734314a67825f0218cc23490a22234ef30c531f146bda5b5f972ce330a55"
Nov 24 11:57:32 crc kubenswrapper[4789]: I1124 11:57:32.997289 4789 scope.go:117] "RemoveContainer" containerID="ab1c66e1538c230613aada80e9b75be0d893f252c28efd5e97e10f7f2eb347ce"
Nov 24 11:57:33 crc kubenswrapper[4789]: I1124 11:57:33.030179 4789 scope.go:117] "RemoveContainer" containerID="09fb29e690ccc7728a0c2f511a01dc0f0121b504660df2b743b4b84795e8fd8b"
Nov 24 11:57:33 crc kubenswrapper[4789]: I1124 11:57:33.090305 4789 scope.go:117] "RemoveContainer" containerID="326d01aed54a27faad41244ea6c18159d3da2e453337a0d01eff0fbbb474da84"
Nov 24 11:57:45 crc kubenswrapper[4789]: I1124 11:57:45.169513 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:57:45 crc kubenswrapper[4789]: E1124 11:57:45.170452 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:57:54 crc kubenswrapper[4789]: I1124 11:57:54.068910 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dvvnv"]
Nov 24 11:57:54 crc kubenswrapper[4789]: I1124 11:57:54.084763 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8dmfq"]
Nov 24 11:57:54 crc kubenswrapper[4789]: I1124 11:57:54.095313 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dvvnv"]
Nov 24 11:57:54 crc kubenswrapper[4789]: I1124 11:57:54.103877 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8dmfq"]
Nov 24 11:57:54 crc kubenswrapper[4789]: I1124 11:57:54.182537 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0" path="/var/lib/kubelet/pods/3a6b15a6-5d09-4cb0-ab4e-bd69b568c5e0/volumes"
Nov 24 11:57:54 crc kubenswrapper[4789]: I1124 11:57:54.183810 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d7d899-a03a-4029-8316-b8388df47987" path="/var/lib/kubelet/pods/e6d7d899-a03a-4029-8316-b8388df47987/volumes"
Nov 24 11:57:55 crc kubenswrapper[4789]: I1124 11:57:55.057089 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8356-account-create-q8xxw"]
Nov 24 11:57:55 crc kubenswrapper[4789]: I1124 11:57:55.067359 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c151-account-create-2lvng"]
Nov 24 11:57:55 crc kubenswrapper[4789]: I1124 11:57:55.077740 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6ae9-account-create-nzsmf"]
Nov 24 11:57:55 crc kubenswrapper[4789]: I1124 11:57:55.087625 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dj46k"]
Nov 24 11:57:55 crc kubenswrapper[4789]: I1124 11:57:55.096927 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dj46k"]
Nov 24 11:57:55 crc kubenswrapper[4789]: I1124 11:57:55.106251 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8356-account-create-q8xxw"]
Nov 24 11:57:55 crc kubenswrapper[4789]: I1124 11:57:55.113950 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c151-account-create-2lvng"]
Nov 24 11:57:55 crc kubenswrapper[4789]: I1124 11:57:55.122119 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6ae9-account-create-nzsmf"]
Nov 24 11:57:56 crc kubenswrapper[4789]: I1124 11:57:56.184141 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad86d851-a1c9-47b6-9f94-28176e2c1e85" path="/var/lib/kubelet/pods/ad86d851-a1c9-47b6-9f94-28176e2c1e85/volumes"
Nov 24 11:57:56 crc kubenswrapper[4789]: I1124 11:57:56.186132 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac1f2fc-bf4e-4b73-b0b4-433b3b38e333" path="/var/lib/kubelet/pods/bac1f2fc-bf4e-4b73-b0b4-433b3b38e333/volumes"
Nov 24 11:57:56 crc kubenswrapper[4789]: I1124 11:57:56.187496 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e702aaf2-e5aa-43ca-a668-c743d706ab47" path="/var/lib/kubelet/pods/e702aaf2-e5aa-43ca-a668-c743d706ab47/volumes"
Nov 24 11:57:56 crc kubenswrapper[4789]: I1124 11:57:56.188876 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec55afb7-d18a-449e-b32b-859da8cb7d47" path="/var/lib/kubelet/pods/ec55afb7-d18a-449e-b32b-859da8cb7d47/volumes"
Nov 24 11:57:59 crc kubenswrapper[4789]: I1124 11:57:59.169268 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:57:59 crc kubenswrapper[4789]: E1124 11:57:59.169796 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:58:10 crc kubenswrapper[4789]: I1124 11:58:10.546907 4789 generic.go:334] "Generic (PLEG): container finished" podID="0c81a61c-6108-4aa5-b0de-fb73115e28cf" containerID="c28fba649b6395f5fc98ea26ce87b78bbfd8133b19cc816393931569b17d80db" exitCode=0
Nov 24 11:58:10 crc kubenswrapper[4789]: I1124 11:58:10.546986 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6" event={"ID":"0c81a61c-6108-4aa5-b0de-fb73115e28cf","Type":"ContainerDied","Data":"c28fba649b6395f5fc98ea26ce87b78bbfd8133b19cc816393931569b17d80db"}
Nov 24 11:58:11 crc kubenswrapper[4789]: I1124 11:58:11.977818 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.065216 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-inventory\") pod \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") "
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.065300 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42cqn\" (UniqueName: \"kubernetes.io/projected/0c81a61c-6108-4aa5-b0de-fb73115e28cf-kube-api-access-42cqn\") pod \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") "
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.065433 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-ssh-key\") pod \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\" (UID: \"0c81a61c-6108-4aa5-b0de-fb73115e28cf\") "
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.077999 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c81a61c-6108-4aa5-b0de-fb73115e28cf-kube-api-access-42cqn" (OuterVolumeSpecName: "kube-api-access-42cqn") pod "0c81a61c-6108-4aa5-b0de-fb73115e28cf" (UID: "0c81a61c-6108-4aa5-b0de-fb73115e28cf"). InnerVolumeSpecName "kube-api-access-42cqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.090869 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0c81a61c-6108-4aa5-b0de-fb73115e28cf" (UID: "0c81a61c-6108-4aa5-b0de-fb73115e28cf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.092532 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-inventory" (OuterVolumeSpecName: "inventory") pod "0c81a61c-6108-4aa5-b0de-fb73115e28cf" (UID: "0c81a61c-6108-4aa5-b0de-fb73115e28cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.168451 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.168601 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c81a61c-6108-4aa5-b0de-fb73115e28cf-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.168625 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42cqn\" (UniqueName: \"kubernetes.io/projected/0c81a61c-6108-4aa5-b0de-fb73115e28cf-kube-api-access-42cqn\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.169601 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:58:12 crc kubenswrapper[4789]: E1124 11:58:12.169843 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.565171 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6" event={"ID":"0c81a61c-6108-4aa5-b0de-fb73115e28cf","Type":"ContainerDied","Data":"33dad11dd5f2e677ec583373881d0d81dd37561a7910f399223bf72b5af5e5eb"}
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.565552 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33dad11dd5f2e677ec583373881d0d81dd37561a7910f399223bf72b5af5e5eb"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.565323 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.657498 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-59qn5"]
Nov 24 11:58:12 crc kubenswrapper[4789]: E1124 11:58:12.657927 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c81a61c-6108-4aa5-b0de-fb73115e28cf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.657955 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c81a61c-6108-4aa5-b0de-fb73115e28cf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.658201 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c81a61c-6108-4aa5-b0de-fb73115e28cf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.659066 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.661661 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.662579 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.662894 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.663295 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.683556 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-59qn5"]
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.779613 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.779705 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.779762 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gkwj\" (UniqueName: \"kubernetes.io/projected/a71628fe-aed3-4023-b18c-8b89d60fabac-kube-api-access-6gkwj\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.882008 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.882080 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.882124 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gkwj\" (UniqueName: \"kubernetes.io/projected/a71628fe-aed3-4023-b18c-8b89d60fabac-kube-api-access-6gkwj\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.889404 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.895272 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.904063 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gkwj\" (UniqueName: \"kubernetes.io/projected/a71628fe-aed3-4023-b18c-8b89d60fabac-kube-api-access-6gkwj\") pod \"ssh-known-hosts-edpm-deployment-59qn5\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") " pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:12 crc kubenswrapper[4789]: I1124 11:58:12.976338 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:13 crc kubenswrapper[4789]: I1124 11:58:13.535794 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-59qn5"]
Nov 24 11:58:13 crc kubenswrapper[4789]: I1124 11:58:13.576452 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5" event={"ID":"a71628fe-aed3-4023-b18c-8b89d60fabac","Type":"ContainerStarted","Data":"c4b4020d380d7133a668e3ef9442d0089a54ba668553f6f2af07a77aac407cb7"}
Nov 24 11:58:14 crc kubenswrapper[4789]: I1124 11:58:14.597063 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5" event={"ID":"a71628fe-aed3-4023-b18c-8b89d60fabac","Type":"ContainerStarted","Data":"7ed0a88b2fc952e2f43ad9eeecf09c3014c07be3716ec7fb71e24b7cda06d4e3"}
Nov 24 11:58:14 crc kubenswrapper[4789]: I1124 11:58:14.621127 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5" podStartSLOduration=1.977483096 podStartE2EDuration="2.6211045s" podCreationTimestamp="2025-11-24 11:58:12 +0000 UTC" firstStartedPulling="2025-11-24 11:58:13.548259418 +0000 UTC m=+1676.130730797" lastFinishedPulling="2025-11-24 11:58:14.191880812 +0000 UTC m=+1676.774352201" observedRunningTime="2025-11-24 11:58:14.614365658 +0000 UTC m=+1677.196837047" watchObservedRunningTime="2025-11-24 11:58:14.6211045 +0000 UTC m=+1677.203575879"
Nov 24 11:58:20 crc kubenswrapper[4789]: I1124 11:58:20.042834 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7v4g"]
Nov 24 11:58:20 crc kubenswrapper[4789]: I1124 11:58:20.054116 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7v4g"]
Nov 24 11:58:20 crc kubenswrapper[4789]: I1124 11:58:20.179569 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3fcc6a-5c80-4e48-819d-defc5053b969" path="/var/lib/kubelet/pods/ee3fcc6a-5c80-4e48-819d-defc5053b969/volumes"
Nov 24 11:58:23 crc kubenswrapper[4789]: I1124 11:58:23.667180 4789 generic.go:334] "Generic (PLEG): container finished" podID="a71628fe-aed3-4023-b18c-8b89d60fabac" containerID="7ed0a88b2fc952e2f43ad9eeecf09c3014c07be3716ec7fb71e24b7cda06d4e3" exitCode=0
Nov 24 11:58:23 crc kubenswrapper[4789]: I1124 11:58:23.667269 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5" event={"ID":"a71628fe-aed3-4023-b18c-8b89d60fabac","Type":"ContainerDied","Data":"7ed0a88b2fc952e2f43ad9eeecf09c3014c07be3716ec7fb71e24b7cda06d4e3"}
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.124796 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.169541 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 11:58:25 crc kubenswrapper[4789]: E1124 11:58:25.169928 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6"
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.215873 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gkwj\" (UniqueName: \"kubernetes.io/projected/a71628fe-aed3-4023-b18c-8b89d60fabac-kube-api-access-6gkwj\") pod \"a71628fe-aed3-4023-b18c-8b89d60fabac\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") "
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.216017 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-ssh-key-openstack-edpm-ipam\") pod \"a71628fe-aed3-4023-b18c-8b89d60fabac\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") "
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.244738 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71628fe-aed3-4023-b18c-8b89d60fabac-kube-api-access-6gkwj" (OuterVolumeSpecName: "kube-api-access-6gkwj") pod "a71628fe-aed3-4023-b18c-8b89d60fabac" (UID: "a71628fe-aed3-4023-b18c-8b89d60fabac"). InnerVolumeSpecName "kube-api-access-6gkwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.254118 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a71628fe-aed3-4023-b18c-8b89d60fabac" (UID: "a71628fe-aed3-4023-b18c-8b89d60fabac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.322296 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-inventory-0\") pod \"a71628fe-aed3-4023-b18c-8b89d60fabac\" (UID: \"a71628fe-aed3-4023-b18c-8b89d60fabac\") "
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.322903 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gkwj\" (UniqueName: \"kubernetes.io/projected/a71628fe-aed3-4023-b18c-8b89d60fabac-kube-api-access-6gkwj\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.322922 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.355511 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a71628fe-aed3-4023-b18c-8b89d60fabac" (UID: "a71628fe-aed3-4023-b18c-8b89d60fabac"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.424901 4789 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a71628fe-aed3-4023-b18c-8b89d60fabac-inventory-0\") on node \"crc\" DevicePath \"\""
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.685915 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5" event={"ID":"a71628fe-aed3-4023-b18c-8b89d60fabac","Type":"ContainerDied","Data":"c4b4020d380d7133a668e3ef9442d0089a54ba668553f6f2af07a77aac407cb7"}
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.685963 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4b4020d380d7133a668e3ef9442d0089a54ba668553f6f2af07a77aac407cb7"
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.686138 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-59qn5"
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.795077 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q"]
Nov 24 11:58:25 crc kubenswrapper[4789]: E1124 11:58:25.795660 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71628fe-aed3-4023-b18c-8b89d60fabac" containerName="ssh-known-hosts-edpm-deployment"
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.795684 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71628fe-aed3-4023-b18c-8b89d60fabac" containerName="ssh-known-hosts-edpm-deployment"
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.795972 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71628fe-aed3-4023-b18c-8b89d60fabac" containerName="ssh-known-hosts-edpm-deployment"
Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.796805 4789 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.803859 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.804036 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.803881 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.804221 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.810438 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q"] Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.834332 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.834448 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.834957 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szjch\" (UniqueName: \"kubernetes.io/projected/75ad7df5-1344-4081-a222-62419ecefc52-kube-api-access-szjch\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.936866 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szjch\" (UniqueName: \"kubernetes.io/projected/75ad7df5-1344-4081-a222-62419ecefc52-kube-api-access-szjch\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.937422 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.938195 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.941609 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.944019 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:25 crc kubenswrapper[4789]: I1124 11:58:25.955500 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szjch\" (UniqueName: \"kubernetes.io/projected/75ad7df5-1344-4081-a222-62419ecefc52-kube-api-access-szjch\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkb8q\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:26 crc kubenswrapper[4789]: I1124 11:58:26.125855 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:26 crc kubenswrapper[4789]: I1124 11:58:26.516402 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q"] Nov 24 11:58:26 crc kubenswrapper[4789]: I1124 11:58:26.695990 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" event={"ID":"75ad7df5-1344-4081-a222-62419ecefc52","Type":"ContainerStarted","Data":"de3d5208c027a796a3b85c8e0f89c4ce207e9cdf67c83514aad99f7ef0db88eb"} Nov 24 11:58:27 crc kubenswrapper[4789]: I1124 11:58:27.707829 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" event={"ID":"75ad7df5-1344-4081-a222-62419ecefc52","Type":"ContainerStarted","Data":"0e16ffafafeb8aeaba32f95a4b388b734e4a7e0d7b1d839c6dbebc5e0703fb21"} Nov 24 11:58:27 crc kubenswrapper[4789]: I1124 11:58:27.734516 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" podStartSLOduration=1.97785915 podStartE2EDuration="2.734490544s" podCreationTimestamp="2025-11-24 11:58:25 +0000 UTC" firstStartedPulling="2025-11-24 11:58:26.554260067 +0000 UTC m=+1689.136731446" lastFinishedPulling="2025-11-24 11:58:27.310891461 +0000 UTC m=+1689.893362840" observedRunningTime="2025-11-24 11:58:27.724394363 +0000 UTC m=+1690.306865742" watchObservedRunningTime="2025-11-24 11:58:27.734490544 +0000 UTC m=+1690.316961923" Nov 24 11:58:33 crc kubenswrapper[4789]: I1124 11:58:33.216809 4789 scope.go:117] "RemoveContainer" containerID="5b8a9d6a9cb38c1d833a4e5b7a464144c861ff338e168661c05ac27df1331b7f" Nov 24 11:58:33 crc kubenswrapper[4789]: I1124 11:58:33.251056 4789 scope.go:117] "RemoveContainer" containerID="1fd4a3a5294bdb4788f22fa1547442f21a7fa5abd54ae995670fbbabbbd44473" Nov 24 11:58:33 crc kubenswrapper[4789]: I1124 11:58:33.301603 4789 scope.go:117] "RemoveContainer" containerID="5aa34a057cb5265feaeaacc2a45c1a5d548c12692ed9489542c07983a1e42832" 
Nov 24 11:58:33 crc kubenswrapper[4789]: I1124 11:58:33.365194 4789 scope.go:117] "RemoveContainer" containerID="3eaa98b25524096365625b6eba64b0bbd0efbcded1e676ef6926dfa22f0d7bab" Nov 24 11:58:33 crc kubenswrapper[4789]: I1124 11:58:33.413338 4789 scope.go:117] "RemoveContainer" containerID="f15f6a3409a556aabb2720c270e24a3a6184887bbe0ffdb8a499fc3d96887905" Nov 24 11:58:33 crc kubenswrapper[4789]: I1124 11:58:33.491216 4789 scope.go:117] "RemoveContainer" containerID="430999e109cfa99f065b4964caeaca483ae34c75c305b6d26a5ddb940a8b005a" Nov 24 11:58:33 crc kubenswrapper[4789]: I1124 11:58:33.514209 4789 scope.go:117] "RemoveContainer" containerID="4186b4087a8527398165c4386aa97342d1f4a32d3343dbf59ce2c3dd2b5e5b95" Nov 24 11:58:37 crc kubenswrapper[4789]: I1124 11:58:37.800343 4789 generic.go:334] "Generic (PLEG): container finished" podID="75ad7df5-1344-4081-a222-62419ecefc52" containerID="0e16ffafafeb8aeaba32f95a4b388b734e4a7e0d7b1d839c6dbebc5e0703fb21" exitCode=0 Nov 24 11:58:37 crc kubenswrapper[4789]: I1124 11:58:37.800437 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" event={"ID":"75ad7df5-1344-4081-a222-62419ecefc52","Type":"ContainerDied","Data":"0e16ffafafeb8aeaba32f95a4b388b734e4a7e0d7b1d839c6dbebc5e0703fb21"} Nov 24 11:58:38 crc kubenswrapper[4789]: I1124 11:58:38.177104 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:58:38 crc kubenswrapper[4789]: E1124 11:58:38.177359 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.192845 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.306178 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-ssh-key\") pod \"75ad7df5-1344-4081-a222-62419ecefc52\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.306299 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szjch\" (UniqueName: \"kubernetes.io/projected/75ad7df5-1344-4081-a222-62419ecefc52-kube-api-access-szjch\") pod \"75ad7df5-1344-4081-a222-62419ecefc52\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.306471 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-inventory\") pod \"75ad7df5-1344-4081-a222-62419ecefc52\" (UID: \"75ad7df5-1344-4081-a222-62419ecefc52\") " Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.313139 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ad7df5-1344-4081-a222-62419ecefc52-kube-api-access-szjch" (OuterVolumeSpecName: "kube-api-access-szjch") pod "75ad7df5-1344-4081-a222-62419ecefc52" (UID: "75ad7df5-1344-4081-a222-62419ecefc52"). InnerVolumeSpecName "kube-api-access-szjch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.338751 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "75ad7df5-1344-4081-a222-62419ecefc52" (UID: "75ad7df5-1344-4081-a222-62419ecefc52"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.342168 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-inventory" (OuterVolumeSpecName: "inventory") pod "75ad7df5-1344-4081-a222-62419ecefc52" (UID: "75ad7df5-1344-4081-a222-62419ecefc52"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.411233 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szjch\" (UniqueName: \"kubernetes.io/projected/75ad7df5-1344-4081-a222-62419ecefc52-kube-api-access-szjch\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.411274 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.411286 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75ad7df5-1344-4081-a222-62419ecefc52-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.817081 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" event={"ID":"75ad7df5-1344-4081-a222-62419ecefc52","Type":"ContainerDied","Data":"de3d5208c027a796a3b85c8e0f89c4ce207e9cdf67c83514aad99f7ef0db88eb"} Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.817545 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3d5208c027a796a3b85c8e0f89c4ce207e9cdf67c83514aad99f7ef0db88eb" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.817181 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkb8q" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.896164 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7"] Nov 24 11:58:39 crc kubenswrapper[4789]: E1124 11:58:39.896661 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ad7df5-1344-4081-a222-62419ecefc52" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.896686 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ad7df5-1344-4081-a222-62419ecefc52" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.896904 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ad7df5-1344-4081-a222-62419ecefc52" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.897648 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.901139 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.901536 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.901561 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.902672 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lhfjg" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.905589 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7"] Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.919124 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vktp4\" (UniqueName: \"kubernetes.io/projected/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-kube-api-access-vktp4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.919197 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:39 crc kubenswrapper[4789]: I1124 11:58:39.919294 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.022959 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.023089 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.023194 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vktp4\" (UniqueName: \"kubernetes.io/projected/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-kube-api-access-vktp4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: 
\"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.029920 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.030773 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.040505 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vktp4\" (UniqueName: \"kubernetes.io/projected/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-kube-api-access-vktp4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.226397 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.804567 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7"] Nov 24 11:58:40 crc kubenswrapper[4789]: I1124 11:58:40.830880 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" event={"ID":"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e","Type":"ContainerStarted","Data":"0d6820bb6713033331737dcbf09d0d11d66a2e691e37ad5a30b26e5ab4ec9f9c"} Nov 24 11:58:42 crc kubenswrapper[4789]: I1124 11:58:42.869985 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" event={"ID":"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e","Type":"ContainerStarted","Data":"0b04fda19995bf99bd14228beb1eaee856a42352a6df4f62a01fffca929d62ea"} Nov 24 11:58:42 crc kubenswrapper[4789]: I1124 11:58:42.894994 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" podStartSLOduration=2.876987518 podStartE2EDuration="3.894976558s" podCreationTimestamp="2025-11-24 11:58:39 +0000 UTC" firstStartedPulling="2025-11-24 11:58:40.809386062 +0000 UTC m=+1703.391857431" lastFinishedPulling="2025-11-24 11:58:41.827375092 +0000 UTC m=+1704.409846471" observedRunningTime="2025-11-24 11:58:42.894166678 +0000 UTC m=+1705.476638057" watchObservedRunningTime="2025-11-24 11:58:42.894976558 +0000 UTC m=+1705.477447937" Nov 24 11:58:43 crc kubenswrapper[4789]: I1124 11:58:43.055050 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9k6d"] Nov 24 11:58:43 crc kubenswrapper[4789]: I1124 11:58:43.063768 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9k6d"] Nov 24 11:58:44 crc kubenswrapper[4789]: I1124 11:58:44.180637 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a66c1b99-9164-4ade-a853-5696e0f21764" path="/var/lib/kubelet/pods/a66c1b99-9164-4ade-a853-5696e0f21764/volumes" Nov 24 11:58:47 crc kubenswrapper[4789]: I1124 11:58:47.036145 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvgpt"] Nov 24 11:58:47 crc kubenswrapper[4789]: I1124 11:58:47.046282 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vvgpt"] Nov 24 11:58:48 crc kubenswrapper[4789]: I1124 11:58:48.203913 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ed866d-2fd1-4ad5-8cf0-6d8655144679" path="/var/lib/kubelet/pods/d8ed866d-2fd1-4ad5-8cf0-6d8655144679/volumes" Nov 24 11:58:49 crc kubenswrapper[4789]: I1124 11:58:49.170111 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:58:49 crc kubenswrapper[4789]: E1124 11:58:49.170534 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:58:55 crc kubenswrapper[4789]: I1124 11:58:55.981155 4789 generic.go:334] "Generic (PLEG): container finished" podID="96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e" containerID="0b04fda19995bf99bd14228beb1eaee856a42352a6df4f62a01fffca929d62ea" exitCode=0 Nov 24 11:58:55 crc kubenswrapper[4789]: I1124 11:58:55.981269 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" event={"ID":"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e","Type":"ContainerDied","Data":"0b04fda19995bf99bd14228beb1eaee856a42352a6df4f62a01fffca929d62ea"} Nov 24 11:58:58 crc kubenswrapper[4789]: I1124 11:58:58.854070 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:58 crc kubenswrapper[4789]: I1124 11:58:58.990549 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-ssh-key\") pod \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " Nov 24 11:58:58 crc kubenswrapper[4789]: I1124 11:58:58.990814 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vktp4\" (UniqueName: \"kubernetes.io/projected/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-kube-api-access-vktp4\") pod \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " Nov 24 11:58:58 crc kubenswrapper[4789]: I1124 11:58:58.991072 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-inventory\") pod \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\" (UID: \"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e\") " Nov 24 11:58:58 crc kubenswrapper[4789]: I1124 11:58:58.996201 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-kube-api-access-vktp4" (OuterVolumeSpecName: "kube-api-access-vktp4") pod "96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e" (UID: "96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e"). InnerVolumeSpecName "kube-api-access-vktp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:59 crc kubenswrapper[4789]: I1124 11:58:59.018168 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" event={"ID":"96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e","Type":"ContainerDied","Data":"0d6820bb6713033331737dcbf09d0d11d66a2e691e37ad5a30b26e5ab4ec9f9c"} Nov 24 11:58:59 crc kubenswrapper[4789]: I1124 11:58:59.018225 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6820bb6713033331737dcbf09d0d11d66a2e691e37ad5a30b26e5ab4ec9f9c" Nov 24 11:58:59 crc kubenswrapper[4789]: I1124 11:58:59.018278 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7" Nov 24 11:58:59 crc kubenswrapper[4789]: I1124 11:58:59.022165 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-inventory" (OuterVolumeSpecName: "inventory") pod "96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e" (UID: "96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:59 crc kubenswrapper[4789]: I1124 11:58:59.023364 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e" (UID: "96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:59 crc kubenswrapper[4789]: I1124 11:58:59.093039 4789 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:59 crc kubenswrapper[4789]: I1124 11:58:59.093074 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vktp4\" (UniqueName: \"kubernetes.io/projected/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-kube-api-access-vktp4\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:59 crc kubenswrapper[4789]: I1124 11:58:59.093088 4789 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:00 crc kubenswrapper[4789]: I1124 11:59:00.170863 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:59:00 crc kubenswrapper[4789]: E1124 11:59:00.171134 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.083207 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hpss8"] Nov 24 11:59:09 crc kubenswrapper[4789]: E1124 11:59:09.084589 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.084612 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.084836 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.086262 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.108755 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpss8"] Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.201551 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-utilities\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.201692 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-catalog-content\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.201726 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvd7\" (UniqueName: \"kubernetes.io/projected/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-kube-api-access-pzvd7\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.271511 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7djsn"] Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.276607 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.301563 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7djsn"] Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.306881 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-catalog-content\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.306946 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvd7\" (UniqueName: \"kubernetes.io/projected/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-kube-api-access-pzvd7\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.306997 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-catalog-content\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.309185 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-utilities\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.309242 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-utilities\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.309278 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7kp\" (UniqueName: \"kubernetes.io/projected/ac68af69-d96c-473e-81dd-fae277ed2a11-kube-api-access-lp7kp\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.314925 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-utilities\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.335298 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-catalog-content\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.367493 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pzvd7\" (UniqueName: \"kubernetes.io/projected/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-kube-api-access-pzvd7\") pod \"certified-operators-hpss8\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.411407 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-utilities\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.411875 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp7kp\" (UniqueName: \"kubernetes.io/projected/ac68af69-d96c-473e-81dd-fae277ed2a11-kube-api-access-lp7kp\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.412056 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-catalog-content\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.412369 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-utilities\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.412529 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-catalog-content\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.421195 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.458286 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp7kp\" (UniqueName: \"kubernetes.io/projected/ac68af69-d96c-473e-81dd-fae277ed2a11-kube-api-access-lp7kp\") pod \"community-operators-7djsn\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:09 crc kubenswrapper[4789]: I1124 11:59:09.596135 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:10 crc kubenswrapper[4789]: I1124 11:59:10.134285 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpss8"] Nov 24 11:59:10 crc kubenswrapper[4789]: I1124 11:59:10.327111 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7djsn"] Nov 24 11:59:11 crc kubenswrapper[4789]: I1124 11:59:11.115415 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpss8" event={"ID":"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1","Type":"ContainerDied","Data":"fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716"} Nov 24 11:59:11 crc kubenswrapper[4789]: I1124 11:59:11.115285 4789 generic.go:334] "Generic (PLEG): container finished" podID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerID="fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716" exitCode=0 Nov 24 11:59:11 crc kubenswrapper[4789]: I1124 11:59:11.116505 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpss8" event={"ID":"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1","Type":"ContainerStarted","Data":"bcc19162b7de8b81fa0f1174a5b4bf005f06fb70f62214e8cc4c10b0e8cde535"} Nov 24 11:59:11 crc kubenswrapper[4789]: I1124 11:59:11.120106 4789 generic.go:334] "Generic (PLEG): container finished" podID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerID="cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412" exitCode=0 Nov 24 11:59:11 crc kubenswrapper[4789]: I1124 11:59:11.120181 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7djsn" event={"ID":"ac68af69-d96c-473e-81dd-fae277ed2a11","Type":"ContainerDied","Data":"cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412"} Nov 24 11:59:11 crc kubenswrapper[4789]: I1124 11:59:11.120441 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7djsn" event={"ID":"ac68af69-d96c-473e-81dd-fae277ed2a11","Type":"ContainerStarted","Data":"93640701ee7a758b319864273519323acb80090854259d649dd672b10a7aff2b"} Nov 24 11:59:13 crc kubenswrapper[4789]: I1124 11:59:13.136798 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpss8" event={"ID":"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1","Type":"ContainerStarted","Data":"3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627"} Nov 24 11:59:13 crc kubenswrapper[4789]: I1124 11:59:13.143614 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7djsn" event={"ID":"ac68af69-d96c-473e-81dd-fae277ed2a11","Type":"ContainerStarted","Data":"96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2"} Nov 24 11:59:14 crc kubenswrapper[4789]: I1124 11:59:14.169870 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:59:14 crc kubenswrapper[4789]: E1124 11:59:14.170658 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 
11:59:18 crc kubenswrapper[4789]: I1124 11:59:18.187833 4789 generic.go:334] "Generic (PLEG): container finished" podID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerID="3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627" exitCode=0 Nov 24 11:59:18 crc kubenswrapper[4789]: I1124 11:59:18.188136 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpss8" event={"ID":"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1","Type":"ContainerDied","Data":"3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627"} Nov 24 11:59:18 crc kubenswrapper[4789]: I1124 11:59:18.194501 4789 generic.go:334] "Generic (PLEG): container finished" podID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerID="96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2" exitCode=0 Nov 24 11:59:18 crc kubenswrapper[4789]: I1124 11:59:18.194822 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7djsn" event={"ID":"ac68af69-d96c-473e-81dd-fae277ed2a11","Type":"ContainerDied","Data":"96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2"} Nov 24 11:59:20 crc kubenswrapper[4789]: I1124 11:59:20.213992 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpss8" event={"ID":"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1","Type":"ContainerStarted","Data":"fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8"} Nov 24 11:59:20 crc kubenswrapper[4789]: I1124 11:59:20.218945 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7djsn" event={"ID":"ac68af69-d96c-473e-81dd-fae277ed2a11","Type":"ContainerStarted","Data":"3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501"} Nov 24 11:59:20 crc kubenswrapper[4789]: I1124 11:59:20.235261 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hpss8" podStartSLOduration=2.878480262 podStartE2EDuration="11.235237637s" podCreationTimestamp="2025-11-24 11:59:09 +0000 UTC" firstStartedPulling="2025-11-24 11:59:11.117555876 +0000 UTC m=+1733.700027255" lastFinishedPulling="2025-11-24 11:59:19.474313241 +0000 UTC m=+1742.056784630" observedRunningTime="2025-11-24 11:59:20.232105923 +0000 UTC m=+1742.814577312" watchObservedRunningTime="2025-11-24 11:59:20.235237637 +0000 UTC m=+1742.817709016" Nov 24 11:59:20 crc kubenswrapper[4789]: I1124 11:59:20.254690 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7djsn" podStartSLOduration=3.198041149 podStartE2EDuration="11.254672202s" podCreationTimestamp="2025-11-24 11:59:09 +0000 UTC" firstStartedPulling="2025-11-24 11:59:11.122004872 +0000 UTC m=+1733.704476251" lastFinishedPulling="2025-11-24 11:59:19.178635925 +0000 UTC m=+1741.761107304" observedRunningTime="2025-11-24 11:59:20.253811661 +0000 UTC m=+1742.836283050" watchObservedRunningTime="2025-11-24 11:59:20.254672202 +0000 UTC m=+1742.837143581" Nov 24 11:59:28 crc kubenswrapper[4789]: I1124 11:59:28.173864 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:59:28 crc kubenswrapper[4789]: E1124 11:59:28.174569 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:59:29 crc kubenswrapper[4789]: I1124 11:59:29.041407 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pqz4s"] Nov 24 11:59:29 crc kubenswrapper[4789]: I1124 11:59:29.049470 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pqz4s"] Nov 24 11:59:29 crc kubenswrapper[4789]: I1124 11:59:29.422112 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:29 crc kubenswrapper[4789]: I1124 11:59:29.422479 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:29 crc kubenswrapper[4789]: I1124 11:59:29.597517 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:29 crc kubenswrapper[4789]: I1124 11:59:29.597562 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:29 crc kubenswrapper[4789]: I1124 11:59:29.640242 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:30 crc kubenswrapper[4789]: I1124 11:59:30.179905 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="419ba329-785c-4647-b1c9-cb366aaaea48" path="/var/lib/kubelet/pods/419ba329-785c-4647-b1c9-cb366aaaea48/volumes" Nov 24 11:59:30 crc kubenswrapper[4789]: I1124 11:59:30.369108 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:30 crc kubenswrapper[4789]: I1124 11:59:30.415185 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7djsn"] Nov 24 11:59:30 crc kubenswrapper[4789]: I1124 11:59:30.467778 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-hpss8" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="registry-server" probeResult="failure" output=< Nov 24 11:59:30 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Nov 24 11:59:30 crc kubenswrapper[4789]: > Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.341783 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7djsn" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerName="registry-server" containerID="cri-o://3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501" gracePeriod=2 Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.748204 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.818978 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-catalog-content\") pod \"ac68af69-d96c-473e-81dd-fae277ed2a11\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.819264 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-utilities\") pod \"ac68af69-d96c-473e-81dd-fae277ed2a11\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.819398 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp7kp\" (UniqueName: \"kubernetes.io/projected/ac68af69-d96c-473e-81dd-fae277ed2a11-kube-api-access-lp7kp\") pod \"ac68af69-d96c-473e-81dd-fae277ed2a11\" (UID: \"ac68af69-d96c-473e-81dd-fae277ed2a11\") " Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.820198 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-utilities" (OuterVolumeSpecName: "utilities") pod "ac68af69-d96c-473e-81dd-fae277ed2a11" (UID: "ac68af69-d96c-473e-81dd-fae277ed2a11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.828643 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac68af69-d96c-473e-81dd-fae277ed2a11-kube-api-access-lp7kp" (OuterVolumeSpecName: "kube-api-access-lp7kp") pod "ac68af69-d96c-473e-81dd-fae277ed2a11" (UID: "ac68af69-d96c-473e-81dd-fae277ed2a11"). InnerVolumeSpecName "kube-api-access-lp7kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.870614 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac68af69-d96c-473e-81dd-fae277ed2a11" (UID: "ac68af69-d96c-473e-81dd-fae277ed2a11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.921605 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.921638 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac68af69-d96c-473e-81dd-fae277ed2a11-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:32 crc kubenswrapper[4789]: I1124 11:59:32.921648 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp7kp\" (UniqueName: \"kubernetes.io/projected/ac68af69-d96c-473e-81dd-fae277ed2a11-kube-api-access-lp7kp\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.356688 4789 generic.go:334] "Generic (PLEG): container finished" podID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerID="3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501" exitCode=0 Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.357038 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7djsn" event={"ID":"ac68af69-d96c-473e-81dd-fae277ed2a11","Type":"ContainerDied","Data":"3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501"} Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.357094 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7djsn" event={"ID":"ac68af69-d96c-473e-81dd-fae277ed2a11","Type":"ContainerDied","Data":"93640701ee7a758b319864273519323acb80090854259d649dd672b10a7aff2b"} Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.357120 4789 scope.go:117] "RemoveContainer" containerID="3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.357321 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7djsn" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.388734 4789 scope.go:117] "RemoveContainer" containerID="96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.407298 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7djsn"] Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.415886 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7djsn"] Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.420258 4789 scope.go:117] "RemoveContainer" containerID="cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.446931 4789 scope.go:117] "RemoveContainer" containerID="3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501" Nov 24 11:59:33 crc kubenswrapper[4789]: E1124 11:59:33.447563 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501\": container with ID starting with 3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501 not found: ID does not exist" containerID="3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.447607 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501"} err="failed to get container status \"3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501\": rpc error: code = NotFound desc = could not find container \"3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501\": container with ID starting with 3410830a74ee94f6690a62452e0f6d8bc005409c73827061a809a5c98c66b501 not found: ID does not exist" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.447632 4789 scope.go:117] "RemoveContainer" containerID="96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2" Nov 24 11:59:33 crc kubenswrapper[4789]: E1124 11:59:33.448026 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2\": container with ID starting with 96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2 not found: ID does not exist" containerID="96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.448046 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2"} err="failed to get container status \"96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2\": rpc error: code = NotFound desc = could not find container \"96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2\": container with ID starting with 96860cf6a002cedfd1c5f456f42b152bd951d68808122a37b4801f7d112e5bf2 not found: ID does not exist" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.448065 4789 scope.go:117] "RemoveContainer" containerID="cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412" Nov 24 11:59:33 crc kubenswrapper[4789]: E1124 11:59:33.448339 4789 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412\": container with ID starting with cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412 not found: ID does not exist" containerID="cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.448370 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412"} err="failed to get container status \"cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412\": rpc error: code = NotFound desc = could not find container \"cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412\": container with ID starting with cef223aafb5c6eca940f716fe3f9f5f09abd3b7b1bf61d035263790432298412 not found: ID does not exist" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.717626 4789 scope.go:117] "RemoveContainer" containerID="a4951fe682c84783cf01089d61af331b4d66eb9d9a32875c8c255605275094ef" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.776237 4789 scope.go:117] "RemoveContainer" containerID="ccbcbb0c6e21d1e6f997643b6f091b6f63af003868bb5c44dd222b6a7543d6b5" Nov 24 11:59:33 crc kubenswrapper[4789]: I1124 11:59:33.817233 4789 scope.go:117] "RemoveContainer" containerID="e5a00590bf0e7a113b98e8e5ff242d4ed17147f3562cfb82c01ba559ae88fd96" Nov 24 11:59:34 crc kubenswrapper[4789]: I1124 11:59:34.190680 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" path="/var/lib/kubelet/pods/ac68af69-d96c-473e-81dd-fae277ed2a11/volumes" Nov 24 11:59:39 crc kubenswrapper[4789]: I1124 11:59:39.172299 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:59:39 crc kubenswrapper[4789]: E1124 11:59:39.172904 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 11:59:39 crc kubenswrapper[4789]: I1124 11:59:39.470849 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:39 crc kubenswrapper[4789]: I1124 11:59:39.525122 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:40 crc kubenswrapper[4789]: I1124 11:59:40.281235 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpss8"] Nov 24 11:59:41 crc kubenswrapper[4789]: I1124 11:59:41.432707 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hpss8" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="registry-server" containerID="cri-o://fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8" gracePeriod=2 Nov 24 11:59:41 crc kubenswrapper[4789]: E1124 11:59:41.625250 4789 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc521f465_28e0_484b_9a2c_7b7fd1b5f9a1.slice/crio-fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:59:41 crc kubenswrapper[4789]: I1124 11:59:41.887859 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.006629 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-catalog-content\") pod \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.007138 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-utilities\") pod \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.007329 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzvd7\" (UniqueName: \"kubernetes.io/projected/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-kube-api-access-pzvd7\") pod \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\" (UID: \"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1\") " Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.008427 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-utilities" (OuterVolumeSpecName: "utilities") pod "c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" (UID: "c521f465-28e0-484b-9a2c-7b7fd1b5f9a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.031698 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-kube-api-access-pzvd7" (OuterVolumeSpecName: "kube-api-access-pzvd7") pod "c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" (UID: "c521f465-28e0-484b-9a2c-7b7fd1b5f9a1"). InnerVolumeSpecName "kube-api-access-pzvd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.058035 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" (UID: "c521f465-28e0-484b-9a2c-7b7fd1b5f9a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.110786 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.110819 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzvd7\" (UniqueName: \"kubernetes.io/projected/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-kube-api-access-pzvd7\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.110829 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.441478 4789 generic.go:334] "Generic (PLEG): container finished" podID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerID="fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8" exitCode=0 Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.441518 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpss8" event={"ID":"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1","Type":"ContainerDied","Data":"fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8"} Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.441544 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpss8" event={"ID":"c521f465-28e0-484b-9a2c-7b7fd1b5f9a1","Type":"ContainerDied","Data":"bcc19162b7de8b81fa0f1174a5b4bf005f06fb70f62214e8cc4c10b0e8cde535"} Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.441561 4789 scope.go:117] "RemoveContainer" containerID="fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.441631 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpss8" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.486674 4789 scope.go:117] "RemoveContainer" containerID="3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.488649 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpss8"] Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.500067 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hpss8"] Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.522322 4789 scope.go:117] "RemoveContainer" containerID="fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.551276 4789 scope.go:117] "RemoveContainer" containerID="fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8" Nov 24 11:59:42 crc kubenswrapper[4789]: E1124 11:59:42.556020 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8\": container with ID starting with fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8 not found: ID does not exist" containerID="fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.556092 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8"} err="failed to get container status \"fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8\": rpc error: code = NotFound desc = could not find container \"fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8\": container with ID starting with fc7c172717a12ad8d54c7a1ef94a8eda2f0e4b5a824a2f262d76daa093ba15b8 not found: ID does not exist" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.556124 4789 scope.go:117] "RemoveContainer" containerID="3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627" Nov 24 11:59:42 crc kubenswrapper[4789]: E1124 11:59:42.556768 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627\": container with ID starting with 3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627 not found: ID does not exist" containerID="3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.556797 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627"} err="failed to get container status \"3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627\": rpc error: code = NotFound desc = could not find container \"3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627\": container with ID starting with 3ae74eff86bb2793fda3df82252ce41d0583f6eaccc8b7d0e802be4d9be45627 not found: ID does not exist" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.556814 4789 scope.go:117] "RemoveContainer" containerID="fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716" Nov 24 11:59:42 crc kubenswrapper[4789]: E1124 11:59:42.557263 4789 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716\": container with ID starting with fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716 not found: ID does not exist" containerID="fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716" Nov 24 11:59:42 crc kubenswrapper[4789]: I1124 11:59:42.557300 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716"} err="failed to get container status \"fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716\": rpc error: code = NotFound desc = could not find container \"fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716\": container with ID starting with fc6ac43d85f7800c255a478e13c4f0b2c00e3fa9760d370fe6ee794047b72716 not found: ID does not exist" Nov 24 11:59:44 crc kubenswrapper[4789]: I1124 11:59:44.179515 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" path="/var/lib/kubelet/pods/c521f465-28e0-484b-9a2c-7b7fd1b5f9a1/volumes" Nov 24 11:59:52 crc kubenswrapper[4789]: I1124 11:59:52.171143 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4" Nov 24 11:59:52 crc kubenswrapper[4789]: I1124 11:59:52.525235 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"838d1706add581c37ff431ed504768d990cd1000bb98f6e1b77849ff344d84b2"} Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.149977 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"] Nov 24 12:00:00 crc kubenswrapper[4789]: E1124 12:00:00.151035 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerName="extract-utilities" Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.151054 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerName="extract-utilities" Nov 24 12:00:00 crc kubenswrapper[4789]: E1124 12:00:00.151064 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="extract-utilities" Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.151072 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="extract-utilities" Nov 24 12:00:00 crc kubenswrapper[4789]: E1124 12:00:00.151096 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerName="extract-content" Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.151105 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerName="extract-content" Nov 24 12:00:00 crc kubenswrapper[4789]: E1124 12:00:00.151120 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerName="registry-server" Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.151127 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerName="registry-server" Nov 24 12:00:00 crc kubenswrapper[4789]: E1124 
Nov 24 12:00:00 crc kubenswrapper[4789]: E1124 12:00:00.151142 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="extract-content"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.151149 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="extract-content"
Nov 24 12:00:00 crc kubenswrapper[4789]: E1124 12:00:00.151182 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="registry-server"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.151190 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="registry-server"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.151398 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac68af69-d96c-473e-81dd-fae277ed2a11" containerName="registry-server"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.151447 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="c521f465-28e0-484b-9a2c-7b7fd1b5f9a1" containerName="registry-server"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.152169 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.154953 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.155292 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.164651 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"]
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.282671 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-secret-volume\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.282766 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-config-volume\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.282817 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvg6\" (UniqueName: \"kubernetes.io/projected/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-kube-api-access-2dvg6\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.384436 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-config-volume\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.384493 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dvg6\" (UniqueName: \"kubernetes.io/projected/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-kube-api-access-2dvg6\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.384589 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-secret-volume\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.385380 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-config-volume\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.390678 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-secret-volume\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.403646 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dvg6\" (UniqueName: \"kubernetes.io/projected/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-kube-api-access-2dvg6\") pod \"collect-profiles-29399760-qwpwl\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl" Nov 24 12:00:00 crc kubenswrapper[4789]: I1124 12:00:00.955162 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl"] Nov 24 12:00:01 crc kubenswrapper[4789]: I1124 12:00:01.614714 4789 generic.go:334] "Generic (PLEG): container finished" podID="9ede0de9-f6c6-4f95-a9b5-fbfb352e840c" containerID="c62a27f5905882df7fa10b77361fdcedf1975ec99d6b2b9938e071c9c24897c7" exitCode=0 Nov 24 12:00:01 crc kubenswrapper[4789]: I1124 12:00:01.614912 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl" event={"ID":"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c","Type":"ContainerDied","Data":"c62a27f5905882df7fa10b77361fdcedf1975ec99d6b2b9938e071c9c24897c7"} Nov 24 12:00:01 crc kubenswrapper[4789]: I1124 12:00:01.615268 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl" event={"ID":"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c","Type":"ContainerStarted","Data":"d1446134e694b3ec20e4b5dfc90c07f0547991743a38859bf570f0b9432ecdde"} Nov 24 12:00:02 crc kubenswrapper[4789]: I1124 12:00:02.937750 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl" Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.033988 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-secret-volume\") pod \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.034322 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dvg6\" (UniqueName: \"kubernetes.io/projected/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-kube-api-access-2dvg6\") pod \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.034399 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-config-volume\") pod \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\" (UID: \"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c\") " Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.035604 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ede0de9-f6c6-4f95-a9b5-fbfb352e840c" (UID: "9ede0de9-f6c6-4f95-a9b5-fbfb352e840c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.039490 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-kube-api-access-2dvg6" (OuterVolumeSpecName: "kube-api-access-2dvg6") pod "9ede0de9-f6c6-4f95-a9b5-fbfb352e840c" (UID: "9ede0de9-f6c6-4f95-a9b5-fbfb352e840c"). InnerVolumeSpecName "kube-api-access-2dvg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.039889 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ede0de9-f6c6-4f95-a9b5-fbfb352e840c" (UID: "9ede0de9-f6c6-4f95-a9b5-fbfb352e840c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.136402 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.136434 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dvg6\" (UniqueName: \"kubernetes.io/projected/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-kube-api-access-2dvg6\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.136445 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ede0de9-f6c6-4f95-a9b5-fbfb352e840c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.632588 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl" event={"ID":"9ede0de9-f6c6-4f95-a9b5-fbfb352e840c","Type":"ContainerDied","Data":"d1446134e694b3ec20e4b5dfc90c07f0547991743a38859bf570f0b9432ecdde"} Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.632637 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1446134e694b3ec20e4b5dfc90c07f0547991743a38859bf570f0b9432ecdde" Nov 24 12:00:03 crc kubenswrapper[4789]: I1124 12:00:03.632700 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-qwpwl" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.159992 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399761-jl7gf"] Nov 24 12:01:00 crc kubenswrapper[4789]: E1124 12:01:00.162752 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ede0de9-f6c6-4f95-a9b5-fbfb352e840c" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.162779 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ede0de9-f6c6-4f95-a9b5-fbfb352e840c" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.163010 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ede0de9-f6c6-4f95-a9b5-fbfb352e840c" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.163903 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.181102 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399761-jl7gf"] Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.217503 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-config-data\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.217586 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjjq\" (UniqueName: \"kubernetes.io/projected/7681045f-7adf-4600-8e53-95b0d13f959b-kube-api-access-pxjjq\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.217612 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-combined-ca-bundle\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.217630 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-fernet-keys\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.318632 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-config-data\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.318677 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxjjq\" (UniqueName: \"kubernetes.io/projected/7681045f-7adf-4600-8e53-95b0d13f959b-kube-api-access-pxjjq\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.318699 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-combined-ca-bundle\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.318717 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-fernet-keys\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.327149 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-combined-ca-bundle\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.333809 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-fernet-keys\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.340766 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-config-data\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.349097 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxjjq\" (UniqueName: \"kubernetes.io/projected/7681045f-7adf-4600-8e53-95b0d13f959b-kube-api-access-pxjjq\") pod \"keystone-cron-29399761-jl7gf\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") " pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.497129 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399761-jl7gf" Nov 24 12:01:00 crc kubenswrapper[4789]: I1124 12:01:00.964642 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399761-jl7gf"] Nov 24 12:01:01 crc kubenswrapper[4789]: I1124 12:01:01.123440 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-jl7gf" event={"ID":"7681045f-7adf-4600-8e53-95b0d13f959b","Type":"ContainerStarted","Data":"600b7b176d354d2b1fa732c655e9c2b38784316a750476390d5bc8c38583eb89"} Nov 24 12:01:02 crc kubenswrapper[4789]: I1124 12:01:02.137418 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-jl7gf" event={"ID":"7681045f-7adf-4600-8e53-95b0d13f959b","Type":"ContainerStarted","Data":"cc70a952a105a681d14428f4b08544941b002fc215b1d1be40a739aa018f5376"} Nov 24 12:01:02 crc kubenswrapper[4789]: I1124 12:01:02.158870 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399761-jl7gf" podStartSLOduration=2.158854567 podStartE2EDuration="2.158854567s" podCreationTimestamp="2025-11-24 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:02.152863579 +0000 UTC m=+1844.735334978" watchObservedRunningTime="2025-11-24 12:01:02.158854567 +0000 UTC m=+1844.741325946" Nov 24 12:01:04 crc kubenswrapper[4789]: I1124 12:01:04.166014 4789 generic.go:334] "Generic (PLEG): container finished" podID="7681045f-7adf-4600-8e53-95b0d13f959b" containerID="cc70a952a105a681d14428f4b08544941b002fc215b1d1be40a739aa018f5376" exitCode=0 Nov 24 12:01:04 crc kubenswrapper[4789]: I1124 12:01:04.166079 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-jl7gf" event={"ID":"7681045f-7adf-4600-8e53-95b0d13f959b","Type":"ContainerDied","Data":"cc70a952a105a681d14428f4b08544941b002fc215b1d1be40a739aa018f5376"} Nov 24 12:01:05 crc kubenswrapper[4789]: 
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.481699 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399761-jl7gf"
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.522593 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-config-data\") pod \"7681045f-7adf-4600-8e53-95b0d13f959b\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") "
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.522735 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-combined-ca-bundle\") pod \"7681045f-7adf-4600-8e53-95b0d13f959b\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") "
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.522828 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-fernet-keys\") pod \"7681045f-7adf-4600-8e53-95b0d13f959b\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") "
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.522873 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxjjq\" (UniqueName: \"kubernetes.io/projected/7681045f-7adf-4600-8e53-95b0d13f959b-kube-api-access-pxjjq\") pod \"7681045f-7adf-4600-8e53-95b0d13f959b\" (UID: \"7681045f-7adf-4600-8e53-95b0d13f959b\") "
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.529232 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7681045f-7adf-4600-8e53-95b0d13f959b" (UID: "7681045f-7adf-4600-8e53-95b0d13f959b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.531010 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7681045f-7adf-4600-8e53-95b0d13f959b-kube-api-access-pxjjq" (OuterVolumeSpecName: "kube-api-access-pxjjq") pod "7681045f-7adf-4600-8e53-95b0d13f959b" (UID: "7681045f-7adf-4600-8e53-95b0d13f959b"). InnerVolumeSpecName "kube-api-access-pxjjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.550596 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7681045f-7adf-4600-8e53-95b0d13f959b" (UID: "7681045f-7adf-4600-8e53-95b0d13f959b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.580861 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-config-data" (OuterVolumeSpecName: "config-data") pod "7681045f-7adf-4600-8e53-95b0d13f959b" (UID: "7681045f-7adf-4600-8e53-95b0d13f959b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.625612 4789 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.625643 4789 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.625654 4789 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7681045f-7adf-4600-8e53-95b0d13f959b-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 24 12:01:05 crc kubenswrapper[4789]: I1124 12:01:05.625663 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxjjq\" (UniqueName: \"kubernetes.io/projected/7681045f-7adf-4600-8e53-95b0d13f959b-kube-api-access-pxjjq\") on node \"crc\" DevicePath \"\""
Nov 24 12:01:06 crc kubenswrapper[4789]: I1124 12:01:06.182151 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-jl7gf" event={"ID":"7681045f-7adf-4600-8e53-95b0d13f959b","Type":"ContainerDied","Data":"600b7b176d354d2b1fa732c655e9c2b38784316a750476390d5bc8c38583eb89"}
Nov 24 12:01:06 crc kubenswrapper[4789]: I1124 12:01:06.182502 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="600b7b176d354d2b1fa732c655e9c2b38784316a750476390d5bc8c38583eb89"
Nov 24 12:01:06 crc kubenswrapper[4789]: I1124 12:01:06.182192 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399761-jl7gf"
Nov 24 12:02:20 crc kubenswrapper[4789]: I1124 12:02:20.162177 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:02:20 crc kubenswrapper[4789]: I1124 12:02:20.162857 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:02:50 crc kubenswrapper[4789]: I1124 12:02:50.162143 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:02:50 crc kubenswrapper[4789]: I1124 12:02:50.162685 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:03:20 crc kubenswrapper[4789]: I1124 12:03:20.163106 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:03:20 crc kubenswrapper[4789]: I1124 12:03:20.163672 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:03:20 crc kubenswrapper[4789]: I1124 12:03:20.163723 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn"
Nov 24 12:03:20 crc kubenswrapper[4789]: I1124 12:03:20.164538 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"838d1706add581c37ff431ed504768d990cd1000bb98f6e1b77849ff344d84b2"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 12:03:20 crc kubenswrapper[4789]: I1124 12:03:20.164594 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://838d1706add581c37ff431ed504768d990cd1000bb98f6e1b77849ff344d84b2" gracePeriod=600
Nov 24 12:03:21 crc kubenswrapper[4789]: I1124 12:03:21.243046 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="838d1706add581c37ff431ed504768d990cd1000bb98f6e1b77849ff344d84b2" exitCode=0
Nov 24 12:03:21 crc kubenswrapper[4789]: I1124 12:03:21.243110 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"838d1706add581c37ff431ed504768d990cd1000bb98f6e1b77849ff344d84b2"}
Nov 24 12:03:21 crc kubenswrapper[4789]: I1124 12:03:21.243571 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd"}
Nov 24 12:03:21 crc kubenswrapper[4789]: I1124 12:03:21.243596 4789 scope.go:117] "RemoveContainer" containerID="35c18d54a6d963863f1131173b65be0814f48cc37a6950d4c230cb7fa15e65d4"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.246072 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ddgtb"]
Nov 24 12:04:13 crc kubenswrapper[4789]: E1124 12:04:13.247342 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7681045f-7adf-4600-8e53-95b0d13f959b" containerName="keystone-cron"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.247371 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="7681045f-7adf-4600-8e53-95b0d13f959b" containerName="keystone-cron"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.247677 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="7681045f-7adf-4600-8e53-95b0d13f959b" containerName="keystone-cron"
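The probe thread above is the whole liveness mechanism in miniature: repeated HTTP GETs against 127.0.0.1:8798/health fail with connection refused at 12:02:20, 12:02:50, and 12:03:20, after which the kubelet declares the container unhealthy and kills it with its 600s grace period. A stdlib Go sketch of such a probe loop; the URL comes from the log, while the period and failureThreshold are illustrative defaults, not values read from this pod's spec:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs a single HTTP liveness check, failing on transport
// errors (e.g. "connect: connection refused", as in the log) or non-2xx/3xx.
func probeOnce(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}
	return nil
}

func main() {
	const (
		url              = "http://127.0.0.1:8798/health"
		period           = 10 * time.Second
		failureThreshold = 3
	)
	failures := 0
	for {
		if err := probeOnce(url); err != nil {
			failures++
			fmt.Printf("Probe failed (%d/%d): %v\n", failures, failureThreshold, err)
			if failures >= failureThreshold {
				fmt.Println("container failed liveness probe, will be restarted")
				return // the kubelet would now kill the container with its grace period
			}
		} else {
			failures = 0 // the consecutive-failure counter resets on any success
		}
		time.Sleep(period)
	}
}
```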
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.249733 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.266367 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddgtb"]
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.349274 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bz4\" (UniqueName: \"kubernetes.io/projected/5ed5071c-3dfe-4611-8663-c28122049a8a-kube-api-access-q8bz4\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.349370 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-utilities\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.349489 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-catalog-content\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.451622 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bz4\" (UniqueName: \"kubernetes.io/projected/5ed5071c-3dfe-4611-8663-c28122049a8a-kube-api-access-q8bz4\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.451704 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-utilities\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.451760 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-catalog-content\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.452268 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-utilities\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.452365 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-catalog-content\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.471435 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bz4\" (UniqueName: \"kubernetes.io/projected/5ed5071c-3dfe-4611-8663-c28122049a8a-kube-api-access-q8bz4\") pod \"redhat-marketplace-ddgtb\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") " pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:13 crc kubenswrapper[4789]: I1124 12:04:13.581719 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:14 crc kubenswrapper[4789]: I1124 12:04:14.079762 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddgtb"]
Nov 24 12:04:14 crc kubenswrapper[4789]: I1124 12:04:14.709732 4789 generic.go:334] "Generic (PLEG): container finished" podID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerID="076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f" exitCode=0
Nov 24 12:04:14 crc kubenswrapper[4789]: I1124 12:04:14.710098 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddgtb" event={"ID":"5ed5071c-3dfe-4611-8663-c28122049a8a","Type":"ContainerDied","Data":"076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f"}
Nov 24 12:04:14 crc kubenswrapper[4789]: I1124 12:04:14.710129 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddgtb" event={"ID":"5ed5071c-3dfe-4611-8663-c28122049a8a","Type":"ContainerStarted","Data":"8e0fa8a37e78fcd797dd3c83ce673e1931947b67d593e2e0e433eb65cd969b16"}
Nov 24 12:04:14 crc kubenswrapper[4789]: I1124 12:04:14.714176 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 12:04:15 crc kubenswrapper[4789]: I1124 12:04:15.719746 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddgtb" event={"ID":"5ed5071c-3dfe-4611-8663-c28122049a8a","Type":"ContainerStarted","Data":"567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504"}
Nov 24 12:04:16 crc kubenswrapper[4789]: I1124 12:04:16.733066 4789 generic.go:334] "Generic (PLEG): container finished" podID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerID="567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504" exitCode=0
Nov 24 12:04:16 crc kubenswrapper[4789]: I1124 12:04:16.733509 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddgtb" event={"ID":"5ed5071c-3dfe-4611-8663-c28122049a8a","Type":"ContainerDied","Data":"567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504"}
Nov 24 12:04:17 crc kubenswrapper[4789]: I1124 12:04:17.761249 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddgtb" event={"ID":"5ed5071c-3dfe-4611-8663-c28122049a8a","Type":"ContainerStarted","Data":"6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333"}
Nov 24 12:04:17 crc kubenswrapper[4789]: I1124 12:04:17.788352 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ddgtb" podStartSLOduration=2.288951197 podStartE2EDuration="4.788307235s" podCreationTimestamp="2025-11-24 12:04:13 +0000 UTC" firstStartedPulling="2025-11-24 12:04:14.713790653 +0000 UTC m=+2037.296262042" lastFinishedPulling="2025-11-24 12:04:17.213146711 +0000 UTC m=+2039.795618080" observedRunningTime="2025-11-24 12:04:17.782213485 +0000 UTC m=+2040.364684864" watchObservedRunningTime="2025-11-24 12:04:17.788307235 +0000 UTC m=+2040.370778614"
Nov 24 12:04:23 crc kubenswrapper[4789]: I1124 12:04:23.582854 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:23 crc kubenswrapper[4789]: I1124 12:04:23.583499 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:23 crc kubenswrapper[4789]: I1124 12:04:23.667779 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:23 crc kubenswrapper[4789]: I1124 12:04:23.880600 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:23 crc kubenswrapper[4789]: I1124 12:04:23.938398 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddgtb"]
Nov 24 12:04:25 crc kubenswrapper[4789]: I1124 12:04:25.840346 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ddgtb" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerName="registry-server" containerID="cri-o://6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333" gracePeriod=2
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.274098 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.405960 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-utilities\") pod \"5ed5071c-3dfe-4611-8663-c28122049a8a\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") "
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.406029 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8bz4\" (UniqueName: \"kubernetes.io/projected/5ed5071c-3dfe-4611-8663-c28122049a8a-kube-api-access-q8bz4\") pod \"5ed5071c-3dfe-4611-8663-c28122049a8a\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") "
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.406189 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-catalog-content\") pod \"5ed5071c-3dfe-4611-8663-c28122049a8a\" (UID: \"5ed5071c-3dfe-4611-8663-c28122049a8a\") "
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.406756 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-utilities" (OuterVolumeSpecName: "utilities") pod "5ed5071c-3dfe-4611-8663-c28122049a8a" (UID: "5ed5071c-3dfe-4611-8663-c28122049a8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.421101 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed5071c-3dfe-4611-8663-c28122049a8a-kube-api-access-q8bz4" (OuterVolumeSpecName: "kube-api-access-q8bz4") pod "5ed5071c-3dfe-4611-8663-c28122049a8a" (UID: "5ed5071c-3dfe-4611-8663-c28122049a8a"). InnerVolumeSpecName "kube-api-access-q8bz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.426848 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ed5071c-3dfe-4611-8663-c28122049a8a" (UID: "5ed5071c-3dfe-4611-8663-c28122049a8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.507954 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.507984 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8bz4\" (UniqueName: \"kubernetes.io/projected/5ed5071c-3dfe-4611-8663-c28122049a8a-kube-api-access-q8bz4\") on node \"crc\" DevicePath \"\""
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.507995 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ed5071c-3dfe-4611-8663-c28122049a8a-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.854208 4789 generic.go:334] "Generic (PLEG): container finished" podID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerID="6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333" exitCode=0
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.854247 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddgtb"
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.854256 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddgtb" event={"ID":"5ed5071c-3dfe-4611-8663-c28122049a8a","Type":"ContainerDied","Data":"6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333"}
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.854307 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddgtb" event={"ID":"5ed5071c-3dfe-4611-8663-c28122049a8a","Type":"ContainerDied","Data":"8e0fa8a37e78fcd797dd3c83ce673e1931947b67d593e2e0e433eb65cd969b16"}
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.854347 4789 scope.go:117] "RemoveContainer" containerID="6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333"
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.905638 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddgtb"]
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.906091 4789 scope.go:117] "RemoveContainer" containerID="567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504"
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.921184 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddgtb"]
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.933962 4789 scope.go:117] "RemoveContainer" containerID="076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f"
Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.968433 4789 scope.go:117] "RemoveContainer" containerID="6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333"
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333\": container with ID starting with 6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333 not found: ID does not exist" containerID="6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333" Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.968990 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333"} err="failed to get container status \"6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333\": rpc error: code = NotFound desc = could not find container \"6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333\": container with ID starting with 6794c69e8006f54bc134d58238fe0940ffaed0970cf39a4dced011d0bda68333 not found: ID does not exist" Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.969022 4789 scope.go:117] "RemoveContainer" containerID="567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504" Nov 24 12:04:26 crc kubenswrapper[4789]: E1124 12:04:26.969405 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504\": container with ID starting with 567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504 not found: ID does not exist" containerID="567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504" Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.969436 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504"} err="failed to get container status \"567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504\": rpc error: code = NotFound desc = could not find container \"567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504\": container with ID starting with 567b60bdabd9fc2381048c280f37f8b5a7f2548e2f31a4f5570c6b13793d0504 not found: ID does not exist" Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.969500 4789 scope.go:117] "RemoveContainer" containerID="076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f" Nov 24 12:04:26 crc kubenswrapper[4789]: E1124 12:04:26.969780 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f\": container with ID starting with 076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f not found: ID does not exist" containerID="076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f" Nov 24 12:04:26 crc kubenswrapper[4789]: I1124 12:04:26.969807 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f"} err="failed to get container status \"076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f\": rpc error: code = NotFound desc = could not find container \"076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f\": container with ID starting with 076299e0d9d45b8b77f3a9173f12adea4bc6058f315120aedacc458b0564bd5f not found: ID does not exist" Nov 24 12:04:28 crc kubenswrapper[4789]: I1124 12:04:28.186173 4789 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" path="/var/lib/kubelet/pods/5ed5071c-3dfe-4611-8663-c28122049a8a/volumes" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.110406 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tlklg"] Nov 24 12:04:29 crc kubenswrapper[4789]: E1124 12:04:29.111315 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerName="extract-content" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.111390 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerName="extract-content" Nov 24 12:04:29 crc kubenswrapper[4789]: E1124 12:04:29.111478 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerName="registry-server" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.111587 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerName="registry-server" Nov 24 12:04:29 crc kubenswrapper[4789]: E1124 12:04:29.111662 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerName="extract-utilities" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.111717 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerName="extract-utilities" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.111952 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed5071c-3dfe-4611-8663-c28122049a8a" containerName="registry-server" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.113234 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.125158 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tlklg"] Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.266282 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mphzt\" (UniqueName: \"kubernetes.io/projected/99018566-a08c-497b-b0cf-85eb7a48c3e3-kube-api-access-mphzt\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.266335 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-catalog-content\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.266359 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-utilities\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.368552 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mphzt\" (UniqueName: \"kubernetes.io/projected/99018566-a08c-497b-b0cf-85eb7a48c3e3-kube-api-access-mphzt\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.368604 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-catalog-content\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.368631 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-utilities\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.369193 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-utilities\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.369296 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-catalog-content\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.386228 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mphzt\" (UniqueName: \"kubernetes.io/projected/99018566-a08c-497b-b0cf-85eb7a48c3e3-kube-api-access-mphzt\") pod \"redhat-operators-tlklg\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.432915 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:29 crc kubenswrapper[4789]: I1124 12:04:29.985103 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tlklg"] Nov 24 12:04:30 crc kubenswrapper[4789]: I1124 12:04:30.895383 4789 generic.go:334] "Generic (PLEG): container finished" podID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerID="e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30" exitCode=0 Nov 24 12:04:30 crc kubenswrapper[4789]: I1124 12:04:30.895588 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlklg" event={"ID":"99018566-a08c-497b-b0cf-85eb7a48c3e3","Type":"ContainerDied","Data":"e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30"} Nov 24 12:04:30 crc kubenswrapper[4789]: I1124 12:04:30.895956 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlklg" event={"ID":"99018566-a08c-497b-b0cf-85eb7a48c3e3","Type":"ContainerStarted","Data":"b15874dc509f93cf31ecd93878b8264711e2a0791c45ba6755532d0d229c59fd"} Nov 24 12:04:31 crc kubenswrapper[4789]: I1124 12:04:31.906175 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlklg" event={"ID":"99018566-a08c-497b-b0cf-85eb7a48c3e3","Type":"ContainerStarted","Data":"80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52"} Nov 24 12:04:35 crc kubenswrapper[4789]: I1124 12:04:35.952608 4789 generic.go:334] "Generic (PLEG): container finished" podID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerID="80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52" exitCode=0 Nov 24 12:04:35 crc kubenswrapper[4789]: I1124 12:04:35.952750 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlklg" event={"ID":"99018566-a08c-497b-b0cf-85eb7a48c3e3","Type":"ContainerDied","Data":"80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52"} Nov 24 12:04:37 crc kubenswrapper[4789]: I1124 12:04:37.977442 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlklg" event={"ID":"99018566-a08c-497b-b0cf-85eb7a48c3e3","Type":"ContainerStarted","Data":"af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62"} Nov 24 12:04:38 crc kubenswrapper[4789]: I1124 12:04:38.003577 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tlklg" podStartSLOduration=2.95180766 podStartE2EDuration="9.003562392s" podCreationTimestamp="2025-11-24 12:04:29 +0000 UTC" firstStartedPulling="2025-11-24 12:04:30.898685811 +0000 UTC m=+2053.481157190" lastFinishedPulling="2025-11-24 12:04:36.950440493 +0000 UTC m=+2059.532911922" observedRunningTime="2025-11-24 12:04:38.000582199 +0000 UTC m=+2060.583053618" watchObservedRunningTime="2025-11-24 12:04:38.003562392 +0000 UTC m=+2060.586033771" Nov 24 12:04:39 crc kubenswrapper[4789]: I1124 12:04:39.434174 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tlklg" Nov 
24 12:04:39 crc kubenswrapper[4789]: I1124 12:04:39.435721 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:40 crc kubenswrapper[4789]: I1124 12:04:40.486792 4789 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tlklg" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="registry-server" probeResult="failure" output=< Nov 24 12:04:40 crc kubenswrapper[4789]: timeout: failed to connect service ":50051" within 1s Nov 24 12:04:40 crc kubenswrapper[4789]: > Nov 24 12:04:49 crc kubenswrapper[4789]: I1124 12:04:49.482020 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:49 crc kubenswrapper[4789]: I1124 12:04:49.545811 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:49 crc kubenswrapper[4789]: I1124 12:04:49.718226 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tlklg"] Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.084782 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tlklg" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="registry-server" containerID="cri-o://af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62" gracePeriod=2 Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.526962 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.688173 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-utilities\") pod \"99018566-a08c-497b-b0cf-85eb7a48c3e3\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.688313 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mphzt\" (UniqueName: \"kubernetes.io/projected/99018566-a08c-497b-b0cf-85eb7a48c3e3-kube-api-access-mphzt\") pod \"99018566-a08c-497b-b0cf-85eb7a48c3e3\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.688396 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-catalog-content\") pod \"99018566-a08c-497b-b0cf-85eb7a48c3e3\" (UID: \"99018566-a08c-497b-b0cf-85eb7a48c3e3\") " Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.689007 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-utilities" (OuterVolumeSpecName: "utilities") pod "99018566-a08c-497b-b0cf-85eb7a48c3e3" (UID: "99018566-a08c-497b-b0cf-85eb7a48c3e3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.693590 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99018566-a08c-497b-b0cf-85eb7a48c3e3-kube-api-access-mphzt" (OuterVolumeSpecName: "kube-api-access-mphzt") pod "99018566-a08c-497b-b0cf-85eb7a48c3e3" (UID: "99018566-a08c-497b-b0cf-85eb7a48c3e3"). InnerVolumeSpecName "kube-api-access-mphzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.772523 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99018566-a08c-497b-b0cf-85eb7a48c3e3" (UID: "99018566-a08c-497b-b0cf-85eb7a48c3e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.790862 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mphzt\" (UniqueName: \"kubernetes.io/projected/99018566-a08c-497b-b0cf-85eb7a48c3e3-kube-api-access-mphzt\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.790899 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:51 crc kubenswrapper[4789]: I1124 12:04:51.790912 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99018566-a08c-497b-b0cf-85eb7a48c3e3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.105230 4789 generic.go:334] "Generic (PLEG): container finished" podID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerID="af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62" exitCode=0 Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.105279 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlklg" event={"ID":"99018566-a08c-497b-b0cf-85eb7a48c3e3","Type":"ContainerDied","Data":"af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62"} Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.105315 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlklg" event={"ID":"99018566-a08c-497b-b0cf-85eb7a48c3e3","Type":"ContainerDied","Data":"b15874dc509f93cf31ecd93878b8264711e2a0791c45ba6755532d0d229c59fd"} Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.105339 4789 scope.go:117] "RemoveContainer" containerID="af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.105535 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tlklg" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.154689 4789 scope.go:117] "RemoveContainer" containerID="80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.155259 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tlklg"] Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.162553 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tlklg"] Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.182490 4789 scope.go:117] "RemoveContainer" containerID="e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.192636 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" path="/var/lib/kubelet/pods/99018566-a08c-497b-b0cf-85eb7a48c3e3/volumes" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.220907 4789 scope.go:117] "RemoveContainer" containerID="af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62" Nov 24 12:04:52 crc kubenswrapper[4789]: E1124 12:04:52.221547 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62\": container with ID starting with af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62 not found: ID does not exist" containerID="af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.221584 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62"} err="failed to get container status \"af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62\": rpc error: code = NotFound desc = could not find container \"af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62\": container with ID starting with af5d7625828f3b9f01cba29facad4ac4892d4bb362583b69a2d15e80854b2d62 not found: ID does not exist" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.221608 4789 scope.go:117] "RemoveContainer" containerID="80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52" Nov 24 12:04:52 crc kubenswrapper[4789]: E1124 12:04:52.222008 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52\": container with ID starting with 80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52 not found: ID does not exist" containerID="80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.222104 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52"} err="failed to get container status \"80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52\": rpc error: code = NotFound desc = could not find container \"80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52\": container with ID starting with 80a35319319925c0a0cd0c017893bcbb116a89250b0aaff4ab105dc6a2b69c52 not found: ID does not exist" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 
12:04:52.222162 4789 scope.go:117] "RemoveContainer" containerID="e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30" Nov 24 12:04:52 crc kubenswrapper[4789]: E1124 12:04:52.222677 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30\": container with ID starting with e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30 not found: ID does not exist" containerID="e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30" Nov 24 12:04:52 crc kubenswrapper[4789]: I1124 12:04:52.222703 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30"} err="failed to get container status \"e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30\": rpc error: code = NotFound desc = could not find container \"e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30\": container with ID starting with e4e8a291180cf3ac39ea1319e63d699c61951e36d27e20bab8228adb2d104a30 not found: ID does not exist" Nov 24 12:05:20 crc kubenswrapper[4789]: I1124 12:05:20.162735 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:05:20 crc kubenswrapper[4789]: I1124 12:05:20.163425 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.403586 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bk76d/must-gather-nc82q"] Nov 24 12:05:33 crc kubenswrapper[4789]: E1124 12:05:33.404445 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="registry-server" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.404478 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="registry-server" Nov 24 12:05:33 crc kubenswrapper[4789]: E1124 12:05:33.404503 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="extract-content" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.404512 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="extract-content" Nov 24 12:05:33 crc kubenswrapper[4789]: E1124 12:05:33.404533 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="extract-utilities" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.404543 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="extract-utilities" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.404735 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="99018566-a08c-497b-b0cf-85eb7a48c3e3" containerName="registry-server" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 
12:05:33.405894 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.416573 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bk76d"/"openshift-service-ca.crt" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.417390 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bk76d"/"kube-root-ca.crt" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.429082 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bk76d/must-gather-nc82q"] Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.472230 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a00101c3-23f4-4180-b2f3-e601ba7afb4f-must-gather-output\") pod \"must-gather-nc82q\" (UID: \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\") " pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.472401 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2qzd\" (UniqueName: \"kubernetes.io/projected/a00101c3-23f4-4180-b2f3-e601ba7afb4f-kube-api-access-p2qzd\") pod \"must-gather-nc82q\" (UID: \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\") " pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.573849 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a00101c3-23f4-4180-b2f3-e601ba7afb4f-must-gather-output\") pod \"must-gather-nc82q\" (UID: \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\") " pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.573966 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2qzd\" (UniqueName: \"kubernetes.io/projected/a00101c3-23f4-4180-b2f3-e601ba7afb4f-kube-api-access-p2qzd\") pod \"must-gather-nc82q\" (UID: \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\") " pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.574410 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a00101c3-23f4-4180-b2f3-e601ba7afb4f-must-gather-output\") pod \"must-gather-nc82q\" (UID: \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\") " pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.593519 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2qzd\" (UniqueName: \"kubernetes.io/projected/a00101c3-23f4-4180-b2f3-e601ba7afb4f-kube-api-access-p2qzd\") pod \"must-gather-nc82q\" (UID: \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\") " pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:05:33 crc kubenswrapper[4789]: I1124 12:05:33.724185 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:05:34 crc kubenswrapper[4789]: I1124 12:05:34.213787 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bk76d/must-gather-nc82q"] Nov 24 12:05:34 crc kubenswrapper[4789]: I1124 12:05:34.478752 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/must-gather-nc82q" event={"ID":"a00101c3-23f4-4180-b2f3-e601ba7afb4f","Type":"ContainerStarted","Data":"4f6f61c3155e1e891654e596cf8120a6128c3f405d59b16f911eed9820293a41"} Nov 24 12:05:38 crc kubenswrapper[4789]: I1124 12:05:38.521077 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/must-gather-nc82q" event={"ID":"a00101c3-23f4-4180-b2f3-e601ba7afb4f","Type":"ContainerStarted","Data":"e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0"} Nov 24 12:05:39 crc kubenswrapper[4789]: I1124 12:05:39.533451 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/must-gather-nc82q" event={"ID":"a00101c3-23f4-4180-b2f3-e601ba7afb4f","Type":"ContainerStarted","Data":"34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f"} Nov 24 12:05:41 crc kubenswrapper[4789]: E1124 12:05:41.183851 4789 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.184:54022->38.102.83.184:35431: write tcp 38.102.83.184:54022->38.102.83.184:35431: write: broken pipe Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.106904 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bk76d/must-gather-nc82q" podStartSLOduration=5.050348873 podStartE2EDuration="9.106883241s" podCreationTimestamp="2025-11-24 12:05:33 +0000 UTC" firstStartedPulling="2025-11-24 12:05:34.194467071 +0000 UTC m=+2116.776938450" lastFinishedPulling="2025-11-24 12:05:38.251001439 +0000 UTC m=+2120.833472818" observedRunningTime="2025-11-24 12:05:39.555516693 +0000 UTC m=+2122.137988082" watchObservedRunningTime="2025-11-24 12:05:42.106883241 +0000 UTC m=+2124.689354630" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.111336 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bk76d/crc-debug-5742n"] Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.112713 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.115307 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bk76d"/"default-dockercfg-qhmdc" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.224336 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a922bb6f-3b0b-4d20-b046-442ff2cd693e-host\") pod \"crc-debug-5742n\" (UID: \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\") " pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.224969 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snlrq\" (UniqueName: \"kubernetes.io/projected/a922bb6f-3b0b-4d20-b046-442ff2cd693e-kube-api-access-snlrq\") pod \"crc-debug-5742n\" (UID: \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\") " pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.326285 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a922bb6f-3b0b-4d20-b046-442ff2cd693e-host\") pod \"crc-debug-5742n\" (UID: \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\") " pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.326655 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snlrq\" (UniqueName: \"kubernetes.io/projected/a922bb6f-3b0b-4d20-b046-442ff2cd693e-kube-api-access-snlrq\") pod \"crc-debug-5742n\" (UID: \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\") " pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.328059 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a922bb6f-3b0b-4d20-b046-442ff2cd693e-host\") pod \"crc-debug-5742n\" (UID: \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\") " pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.356254 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snlrq\" (UniqueName: \"kubernetes.io/projected/a922bb6f-3b0b-4d20-b046-442ff2cd693e-kube-api-access-snlrq\") pod \"crc-debug-5742n\" (UID: \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\") " pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.432863 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:05:42 crc kubenswrapper[4789]: W1124 12:05:42.501211 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda922bb6f_3b0b_4d20_b046_442ff2cd693e.slice/crio-540988b33fe647a0784a3ddf9ef133567cb2e2c2efc17e56cba4e9d06b328695 WatchSource:0}: Error finding container 540988b33fe647a0784a3ddf9ef133567cb2e2c2efc17e56cba4e9d06b328695: Status 404 returned error can't find the container with id 540988b33fe647a0784a3ddf9ef133567cb2e2c2efc17e56cba4e9d06b328695 Nov 24 12:05:42 crc kubenswrapper[4789]: I1124 12:05:42.565182 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/crc-debug-5742n" event={"ID":"a922bb6f-3b0b-4d20-b046-442ff2cd693e","Type":"ContainerStarted","Data":"540988b33fe647a0784a3ddf9ef133567cb2e2c2efc17e56cba4e9d06b328695"} Nov 24 12:05:50 crc kubenswrapper[4789]: I1124 12:05:50.163788 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:05:50 crc kubenswrapper[4789]: I1124 12:05:50.164324 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:05:54 crc kubenswrapper[4789]: I1124 12:05:54.707011 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/crc-debug-5742n" event={"ID":"a922bb6f-3b0b-4d20-b046-442ff2cd693e","Type":"ContainerStarted","Data":"389d969339073445e4e34f28fa362d200f598199162672006e0856648172130e"} Nov 24 12:05:54 crc kubenswrapper[4789]: I1124 12:05:54.730495 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bk76d/crc-debug-5742n" podStartSLOduration=1.315465735 podStartE2EDuration="12.730476827s" podCreationTimestamp="2025-11-24 12:05:42 +0000 UTC" firstStartedPulling="2025-11-24 12:05:42.510210284 +0000 UTC m=+2125.092681663" lastFinishedPulling="2025-11-24 12:05:53.925221376 +0000 UTC m=+2136.507692755" observedRunningTime="2025-11-24 12:05:54.72241642 +0000 UTC m=+2137.304887809" watchObservedRunningTime="2025-11-24 12:05:54.730476827 +0000 UTC m=+2137.312948206" Nov 24 12:06:19 crc kubenswrapper[4789]: I1124 12:06:19.920098 4789 generic.go:334] "Generic (PLEG): container finished" podID="a922bb6f-3b0b-4d20-b046-442ff2cd693e" containerID="389d969339073445e4e34f28fa362d200f598199162672006e0856648172130e" exitCode=0 Nov 24 12:06:19 crc kubenswrapper[4789]: I1124 12:06:19.920213 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/crc-debug-5742n" event={"ID":"a922bb6f-3b0b-4d20-b046-442ff2cd693e","Type":"ContainerDied","Data":"389d969339073445e4e34f28fa362d200f598199162672006e0856648172130e"} Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.162257 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.162303 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.162343 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.163039 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.163087 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" gracePeriod=600 Nov 24 12:06:20 crc kubenswrapper[4789]: E1124 12:06:20.285425 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.931846 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" exitCode=0 Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.931932 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd"} Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.932218 4789 scope.go:117] "RemoveContainer" containerID="838d1706add581c37ff431ed504768d990cd1000bb98f6e1b77849ff344d84b2" Nov 24 12:06:20 crc kubenswrapper[4789]: I1124 12:06:20.932999 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:06:20 crc kubenswrapper[4789]: E1124 12:06:20.933361 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.049742 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.099121 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bk76d/crc-debug-5742n"] Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.106169 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bk76d/crc-debug-5742n"] Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.216366 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snlrq\" (UniqueName: \"kubernetes.io/projected/a922bb6f-3b0b-4d20-b046-442ff2cd693e-kube-api-access-snlrq\") pod \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\" (UID: \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\") " Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.216599 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a922bb6f-3b0b-4d20-b046-442ff2cd693e-host\") pod \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\" (UID: \"a922bb6f-3b0b-4d20-b046-442ff2cd693e\") " Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.217033 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a922bb6f-3b0b-4d20-b046-442ff2cd693e-host" (OuterVolumeSpecName: "host") pod "a922bb6f-3b0b-4d20-b046-442ff2cd693e" (UID: "a922bb6f-3b0b-4d20-b046-442ff2cd693e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.231636 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a922bb6f-3b0b-4d20-b046-442ff2cd693e-kube-api-access-snlrq" (OuterVolumeSpecName: "kube-api-access-snlrq") pod "a922bb6f-3b0b-4d20-b046-442ff2cd693e" (UID: "a922bb6f-3b0b-4d20-b046-442ff2cd693e"). InnerVolumeSpecName "kube-api-access-snlrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.318151 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snlrq\" (UniqueName: \"kubernetes.io/projected/a922bb6f-3b0b-4d20-b046-442ff2cd693e-kube-api-access-snlrq\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.318184 4789 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a922bb6f-3b0b-4d20-b046-442ff2cd693e-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.941563 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540988b33fe647a0784a3ddf9ef133567cb2e2c2efc17e56cba4e9d06b328695" Nov 24 12:06:21 crc kubenswrapper[4789]: I1124 12:06:21.941562 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bk76d/crc-debug-5742n" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.187077 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a922bb6f-3b0b-4d20-b046-442ff2cd693e" path="/var/lib/kubelet/pods/a922bb6f-3b0b-4d20-b046-442ff2cd693e/volumes" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.475486 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bk76d/crc-debug-z78nz"] Nov 24 12:06:22 crc kubenswrapper[4789]: E1124 12:06:22.476435 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a922bb6f-3b0b-4d20-b046-442ff2cd693e" containerName="container-00" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.476562 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a922bb6f-3b0b-4d20-b046-442ff2cd693e" containerName="container-00" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.476856 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a922bb6f-3b0b-4d20-b046-442ff2cd693e" containerName="container-00" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.477685 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.480182 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bk76d"/"default-dockercfg-qhmdc" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.539248 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncnqp\" (UniqueName: \"kubernetes.io/projected/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-kube-api-access-ncnqp\") pod \"crc-debug-z78nz\" (UID: \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\") " pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.539549 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-host\") pod \"crc-debug-z78nz\" (UID: \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\") " pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.640556 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncnqp\" (UniqueName: \"kubernetes.io/projected/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-kube-api-access-ncnqp\") pod \"crc-debug-z78nz\" (UID: \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\") " pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.640603 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-host\") pod \"crc-debug-z78nz\" (UID: \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\") " pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.640731 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-host\") pod \"crc-debug-z78nz\" (UID: \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\") " pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.659699 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncnqp\" (UniqueName: 
\"kubernetes.io/projected/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-kube-api-access-ncnqp\") pod \"crc-debug-z78nz\" (UID: \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\") " pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.794365 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:22 crc kubenswrapper[4789]: W1124 12:06:22.822606 4789 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80d76766_a4a8_456f_89aa_7bbb81c8ff9c.slice/crio-c71cf7524c7384ff22d41b4f8ed1fca0e2b4ff939f78f86a0edd08248c3b4b39 WatchSource:0}: Error finding container c71cf7524c7384ff22d41b4f8ed1fca0e2b4ff939f78f86a0edd08248c3b4b39: Status 404 returned error can't find the container with id c71cf7524c7384ff22d41b4f8ed1fca0e2b4ff939f78f86a0edd08248c3b4b39 Nov 24 12:06:22 crc kubenswrapper[4789]: I1124 12:06:22.952187 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/crc-debug-z78nz" event={"ID":"80d76766-a4a8-456f-89aa-7bbb81c8ff9c","Type":"ContainerStarted","Data":"c71cf7524c7384ff22d41b4f8ed1fca0e2b4ff939f78f86a0edd08248c3b4b39"} Nov 24 12:06:23 crc kubenswrapper[4789]: I1124 12:06:23.964194 4789 generic.go:334] "Generic (PLEG): container finished" podID="80d76766-a4a8-456f-89aa-7bbb81c8ff9c" containerID="2253b3e23ffc2dfd14316b17e7e4571ba91c795afdc23c1e097ea9bacf8ee038" exitCode=1 Nov 24 12:06:23 crc kubenswrapper[4789]: I1124 12:06:23.964285 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/crc-debug-z78nz" event={"ID":"80d76766-a4a8-456f-89aa-7bbb81c8ff9c","Type":"ContainerDied","Data":"2253b3e23ffc2dfd14316b17e7e4571ba91c795afdc23c1e097ea9bacf8ee038"} Nov 24 12:06:24 crc kubenswrapper[4789]: I1124 12:06:24.010958 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bk76d/crc-debug-z78nz"] Nov 24 12:06:24 crc kubenswrapper[4789]: I1124 12:06:24.028286 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bk76d/crc-debug-z78nz"] Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.087314 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.190108 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-host\") pod \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\" (UID: \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\") " Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.190236 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncnqp\" (UniqueName: \"kubernetes.io/projected/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-kube-api-access-ncnqp\") pod \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\" (UID: \"80d76766-a4a8-456f-89aa-7bbb81c8ff9c\") " Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.191214 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-host" (OuterVolumeSpecName: "host") pod "80d76766-a4a8-456f-89aa-7bbb81c8ff9c" (UID: "80d76766-a4a8-456f-89aa-7bbb81c8ff9c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.206691 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-kube-api-access-ncnqp" (OuterVolumeSpecName: "kube-api-access-ncnqp") pod "80d76766-a4a8-456f-89aa-7bbb81c8ff9c" (UID: "80d76766-a4a8-456f-89aa-7bbb81c8ff9c"). InnerVolumeSpecName "kube-api-access-ncnqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.292368 4789 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-host\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.292406 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncnqp\" (UniqueName: \"kubernetes.io/projected/80d76766-a4a8-456f-89aa-7bbb81c8ff9c-kube-api-access-ncnqp\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.980600 4789 scope.go:117] "RemoveContainer" containerID="2253b3e23ffc2dfd14316b17e7e4571ba91c795afdc23c1e097ea9bacf8ee038" Nov 24 12:06:25 crc kubenswrapper[4789]: I1124 12:06:25.980814 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bk76d/crc-debug-z78nz" Nov 24 12:06:26 crc kubenswrapper[4789]: I1124 12:06:26.179223 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80d76766-a4a8-456f-89aa-7bbb81c8ff9c" path="/var/lib/kubelet/pods/80d76766-a4a8-456f-89aa-7bbb81c8ff9c/volumes" Nov 24 12:06:32 crc kubenswrapper[4789]: I1124 12:06:32.170766 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:06:32 crc kubenswrapper[4789]: E1124 12:06:32.172199 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:06:47 crc kubenswrapper[4789]: I1124 12:06:47.171784 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:06:47 crc kubenswrapper[4789]: E1124 12:06:47.172616 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:06:58 crc kubenswrapper[4789]: I1124 12:06:58.179290 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:06:58 crc kubenswrapper[4789]: E1124 12:06:58.180888 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:07:04 crc kubenswrapper[4789]: I1124 12:07:04.469334 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8656dd4674-kcg9p_6ea02afa-6da7-4e18-ae3f-7110a7b652f3/barbican-api/0.log" Nov 24 12:07:04 crc kubenswrapper[4789]: I1124 12:07:04.616094 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8656dd4674-kcg9p_6ea02afa-6da7-4e18-ae3f-7110a7b652f3/barbican-api-log/0.log" Nov 24 12:07:04 crc kubenswrapper[4789]: I1124 12:07:04.676226 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6777ddb46-lfh4x_e6858fb3-9f7e-4855-abd4-23fdc894d153/barbican-keystone-listener/0.log" Nov 24 12:07:04 crc kubenswrapper[4789]: I1124 12:07:04.706975 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6777ddb46-lfh4x_e6858fb3-9f7e-4855-abd4-23fdc894d153/barbican-keystone-listener-log/0.log" Nov 24 12:07:04 crc kubenswrapper[4789]: I1124 12:07:04.875979 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c6b6fc77f-wrz6s_6a3e8f3b-bcd4-4911-b365-e02bad3e8611/barbican-worker/0.log" Nov 24 12:07:04 crc kubenswrapper[4789]: I1124 12:07:04.943394 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c6b6fc77f-wrz6s_6a3e8f3b-bcd4-4911-b365-e02bad3e8611/barbican-worker-log/0.log" Nov 24 12:07:05 crc kubenswrapper[4789]: I1124 12:07:05.171135 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pccjk_d2940969-00db-4677-aaae-5d1d0a25a10a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:05 crc kubenswrapper[4789]: I1124 12:07:05.280943 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0c9f2fa6-041c-485c-a636-af6766444f89/ceilometer-notification-agent/0.log" Nov 24 12:07:05 crc kubenswrapper[4789]: I1124 12:07:05.309647 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0c9f2fa6-041c-485c-a636-af6766444f89/ceilometer-central-agent/0.log" Nov 24 12:07:05 crc kubenswrapper[4789]: I1124 12:07:05.424962 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0c9f2fa6-041c-485c-a636-af6766444f89/proxy-httpd/0.log" Nov 24 12:07:05 crc kubenswrapper[4789]: I1124 12:07:05.475860 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0c9f2fa6-041c-485c-a636-af6766444f89/sg-core/0.log" Nov 24 12:07:05 crc kubenswrapper[4789]: I1124 12:07:05.563969 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-zh7m4_ad8a5468-ca7f-4a4e-a436-068f8f1256c3/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:05 crc kubenswrapper[4789]: I1124 12:07:05.712307 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb4a54b4-60e2-46ee-a063-e70757b214d2/cinder-api/0.log" Nov 24 12:07:05 crc kubenswrapper[4789]: I1124 12:07:05.774289 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb4a54b4-60e2-46ee-a063-e70757b214d2/cinder-api-log/0.log" Nov 24 12:07:06 crc kubenswrapper[4789]: I1124 12:07:06.007908 4789 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_51c9e2b5-9521-4872-ab1a-f0981449f506/cinder-scheduler/0.log" Nov 24 12:07:06 crc kubenswrapper[4789]: I1124 12:07:06.217264 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_51c9e2b5-9521-4872-ab1a-f0981449f506/probe/0.log" Nov 24 12:07:06 crc kubenswrapper[4789]: I1124 12:07:06.363504 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-nf9h6_0c81a61c-6108-4aa5-b0de-fb73115e28cf/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:06 crc kubenswrapper[4789]: I1124 12:07:06.372964 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-j8vsn_34b9fe12-ae2c-4754-bf4a-4ab29c45f336/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:06 crc kubenswrapper[4789]: I1124 12:07:06.560570 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69fd9b48bc-fwmqb_0ffa9725-d57a-4cbd-8fbd-84702ae4799e/init/0.log" Nov 24 12:07:06 crc kubenswrapper[4789]: I1124 12:07:06.804353 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69fd9b48bc-fwmqb_0ffa9725-d57a-4cbd-8fbd-84702ae4799e/init/0.log" Nov 24 12:07:06 crc kubenswrapper[4789]: I1124 12:07:06.919301 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69fd9b48bc-fwmqb_0ffa9725-d57a-4cbd-8fbd-84702ae4799e/dnsmasq-dns/0.log" Nov 24 12:07:06 crc kubenswrapper[4789]: I1124 12:07:06.941366 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-djqbm_f4833b4b-25fe-4457-bb87-72efdfe17034/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:07 crc kubenswrapper[4789]: I1124 12:07:07.155983 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-784c4967d9-9h8jd_d23ab493-ddd0-4e41-aa4d-ed9de9256d1c/keystone-api/0.log" Nov 24 12:07:07 crc kubenswrapper[4789]: I1124 12:07:07.204971 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399761-jl7gf_7681045f-7adf-4600-8e53-95b0d13f959b/keystone-cron/0.log" Nov 24 12:07:07 crc kubenswrapper[4789]: I1124 12:07:07.463020 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8bfbe7a9-1f95-4bfa-b298-71ce199ba20c/kube-state-metrics/0.log" Nov 24 12:07:07 crc kubenswrapper[4789]: I1124 12:07:07.885499 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85c5468469-htqfg_f2e0e6a2-b3ea-478b-b836-c20f7962266c/neutron-httpd/0.log" Nov 24 12:07:07 crc kubenswrapper[4789]: I1124 12:07:07.888758 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85c5468469-htqfg_f2e0e6a2-b3ea-478b-b836-c20f7962266c/neutron-api/0.log" Nov 24 12:07:08 crc kubenswrapper[4789]: I1124 12:07:08.452865 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c2a9a39a-cd0e-49d0-a161-065526d89b49/nova-api-log/0.log" Nov 24 12:07:08 crc kubenswrapper[4789]: I1124 12:07:08.534381 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c2a9a39a-cd0e-49d0-a161-065526d89b49/nova-api-api/0.log" Nov 24 12:07:08 crc kubenswrapper[4789]: I1124 12:07:08.884785 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_68cf5f04-f863-4ee8-89e2-fe21038afe96/nova-cell0-conductor-conductor/0.log" Nov 24 12:07:09 crc kubenswrapper[4789]: I1124 12:07:09.004904 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_51167964-7234-4713-aef7-4f75548e9040/nova-cell1-conductor-conductor/0.log" Nov 24 12:07:09 crc kubenswrapper[4789]: I1124 12:07:09.210951 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6a12e5d7-5339-4a7b-a9d1-0355b3b2fd62/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 12:07:09 crc kubenswrapper[4789]: I1124 12:07:09.479879 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0ca2367e-056b-4136-98ec-d53805416c09/nova-metadata-log/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.048493 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7f04a406-8a85-4850-9611-311d3229b127/nova-scheduler-scheduler/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.113567 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f6dd80c-3e9a-4ee6-83f8-40195165ec1c/mysql-bootstrap/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.322240 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0ca2367e-056b-4136-98ec-d53805416c09/nova-metadata-metadata/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.378776 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f6dd80c-3e9a-4ee6-83f8-40195165ec1c/galera/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.388110 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f6dd80c-3e9a-4ee6-83f8-40195165ec1c/mysql-bootstrap/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.635029 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e6236001-96b0-4425-9f1f-eb84778d290a/mysql-bootstrap/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.838972 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e6236001-96b0-4425-9f1f-eb84778d290a/galera/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.849873 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e6236001-96b0-4425-9f1f-eb84778d290a/mysql-bootstrap/0.log" Nov 24 12:07:10 crc kubenswrapper[4789]: I1124 12:07:10.909842 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_06801047-ac5f-4da6-a843-3c064e628c38/openstackclient/0.log" Nov 24 12:07:11 crc kubenswrapper[4789]: I1124 12:07:11.087401 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fm6r6_9d616a72-acce-41db-9107-142979aadf1f/openstack-network-exporter/0.log" Nov 24 12:07:11 crc kubenswrapper[4789]: I1124 12:07:11.201161 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4tbr6_315d6386-62b1-4775-8185-2814e6b91bf5/ovsdb-server-init/0.log" Nov 24 12:07:11 crc kubenswrapper[4789]: I1124 12:07:11.389569 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4tbr6_315d6386-62b1-4775-8185-2814e6b91bf5/ovs-vswitchd/0.log" Nov 24 12:07:11 crc kubenswrapper[4789]: I1124 12:07:11.470229 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-4tbr6_315d6386-62b1-4775-8185-2814e6b91bf5/ovsdb-server-init/0.log" Nov 24 12:07:11 crc kubenswrapper[4789]: I1124 12:07:11.593786 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4tbr6_315d6386-62b1-4775-8185-2814e6b91bf5/ovsdb-server/0.log" Nov 24 12:07:11 crc kubenswrapper[4789]: I1124 12:07:11.613751 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zh2n4_c77484cd-66ed-4471-9136-5e44eadd28ad/ovn-controller/0.log" Nov 24 12:07:11 crc kubenswrapper[4789]: I1124 12:07:11.816198 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8dad9e06-c4ff-46fd-9864-a6cd81ad08db/ovn-northd/0.log" Nov 24 12:07:11 crc kubenswrapper[4789]: I1124 12:07:11.859416 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8dad9e06-c4ff-46fd-9864-a6cd81ad08db/openstack-network-exporter/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.129613 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9a18067c-f6d5-4650-897e-ec8e249b0e8b/openstack-network-exporter/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.142225 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9a18067c-f6d5-4650-897e-ec8e249b0e8b/ovsdbserver-nb/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.170041 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:07:12 crc kubenswrapper[4789]: E1124 12:07:12.170283 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.353560 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_77772f5a-c498-46a2-861c-8145c554f262/openstack-network-exporter/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.412983 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_77772f5a-c498-46a2-861c-8145c554f262/ovsdbserver-sb/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.434332 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-546dc675b-x2vpf_27b80dec-87d3-4357-a667-60524f89de21/placement-api/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.637752 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-546dc675b-x2vpf_27b80dec-87d3-4357-a667-60524f89de21/placement-log/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.707489 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1652b281-174f-466f-9b1b-52006fe58620/setup-container/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.854359 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1652b281-174f-466f-9b1b-52006fe58620/setup-container/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.885312 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1652b281-174f-466f-9b1b-52006fe58620/rabbitmq/0.log" Nov 24 12:07:12 crc kubenswrapper[4789]: I1124 12:07:12.992876 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_61dd768a-2e14-4e8f-89da-0feeb90b9796/setup-container/0.log" Nov 24 12:07:13 crc kubenswrapper[4789]: I1124 12:07:13.335893 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-cgfs7_96bc0bc5-e929-4c6f-b7eb-e0d2a982dc0e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:13 crc kubenswrapper[4789]: I1124 12:07:13.340076 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_61dd768a-2e14-4e8f-89da-0feeb90b9796/setup-container/0.log" Nov 24 12:07:13 crc kubenswrapper[4789]: I1124 12:07:13.356313 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_61dd768a-2e14-4e8f-89da-0feeb90b9796/rabbitmq/0.log" Nov 24 12:07:13 crc kubenswrapper[4789]: I1124 12:07:13.691626 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tkb8q_75ad7df5-1344-4081-a222-62419ecefc52/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:13 crc kubenswrapper[4789]: I1124 12:07:13.692326 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bbkm6_9fa6d7a2-c7df-413c-8a31-3d7e76031554/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:13 crc kubenswrapper[4789]: I1124 12:07:13.892783 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-59qn5_a71628fe-aed3-4023-b18c-8b89d60fabac/ssh-known-hosts-edpm-deployment/0.log" Nov 24 12:07:14 crc kubenswrapper[4789]: I1124 12:07:14.058187 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5fkhb_6ac9a80b-ec9c-43cc-b16d-d8113619caec/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 12:07:14 crc kubenswrapper[4789]: I1124 12:07:14.813816 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6583a8fe-db60-4eac-8bd0-32278517eff8/memcached/0.log" Nov 24 12:07:25 crc kubenswrapper[4789]: I1124 12:07:25.169786 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:07:25 crc kubenswrapper[4789]: E1124 12:07:25.170715 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:07:34 crc kubenswrapper[4789]: I1124 12:07:34.320483 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-4n8q6_0b73227d-0b7b-468c-a0c3-fefa29209aa0/kube-rbac-proxy/0.log" Nov 24 12:07:34 crc kubenswrapper[4789]: I1124 12:07:34.427985 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-4n8q6_0b73227d-0b7b-468c-a0c3-fefa29209aa0/manager/0.log" Nov 24 12:07:34 crc kubenswrapper[4789]: 
I1124 12:07:34.570356 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-q5gj6_d7389a19-508e-48aa-81f3-25fc9fd76fbf/kube-rbac-proxy/0.log" Nov 24 12:07:34 crc kubenswrapper[4789]: I1124 12:07:34.615058 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-q5gj6_d7389a19-508e-48aa-81f3-25fc9fd76fbf/manager/0.log" Nov 24 12:07:34 crc kubenswrapper[4789]: I1124 12:07:34.789452 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95_ab00850e-e7eb-4a71-ae4a-54c3b3d085f1/util/0.log" Nov 24 12:07:34 crc kubenswrapper[4789]: I1124 12:07:34.930176 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95_ab00850e-e7eb-4a71-ae4a-54c3b3d085f1/util/0.log" Nov 24 12:07:34 crc kubenswrapper[4789]: I1124 12:07:34.967061 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95_ab00850e-e7eb-4a71-ae4a-54c3b3d085f1/pull/0.log" Nov 24 12:07:34 crc kubenswrapper[4789]: I1124 12:07:34.997483 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95_ab00850e-e7eb-4a71-ae4a-54c3b3d085f1/pull/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.164440 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95_ab00850e-e7eb-4a71-ae4a-54c3b3d085f1/extract/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.201498 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95_ab00850e-e7eb-4a71-ae4a-54c3b3d085f1/util/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.243106 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dcfa20349335657767e217cb0195ee063c9c2b9385e7fe3e98d7962d23f7x95_ab00850e-e7eb-4a71-ae4a-54c3b3d085f1/pull/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.365223 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-vcqnx_d6f07f19-826c-41c8-8861-97ffffe88f6e/kube-rbac-proxy/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.437973 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-vcqnx_d6f07f19-826c-41c8-8861-97ffffe88f6e/manager/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.492069 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-mt9mk_f0a7631e-95a4-4bb8-aa13-72b02c833aba/kube-rbac-proxy/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.625108 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-mt9mk_f0a7631e-95a4-4bb8-aa13-72b02c833aba/manager/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.692530 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-vrsx6_95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef/kube-rbac-proxy/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.736704 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-vrsx6_95a81c85-d5ed-49a2-a24d-1aa8f5ed1aef/manager/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.860287 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-hxrfg_74fd2f2b-e4c9-465b-928f-adbe316321a4/manager/0.log" Nov 24 12:07:35 crc kubenswrapper[4789]: I1124 12:07:35.867967 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-hxrfg_74fd2f2b-e4c9-465b-928f-adbe316321a4/kube-rbac-proxy/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.010081 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-g4kfx_915814e7-0e49-4bec-8403-6e95d1008e72/kube-rbac-proxy/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.215010 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-g4kfx_915814e7-0e49-4bec-8403-6e95d1008e72/manager/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.233329 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-tfdds_661a8eee-259e-40e5-83c5-7d5b78981eb5/kube-rbac-proxy/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.281693 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-tfdds_661a8eee-259e-40e5-83c5-7d5b78981eb5/manager/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.419927 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-5wh6z_89488e43-e2eb-44a1-ac26-fcb0c87047f6/kube-rbac-proxy/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.481957 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-5wh6z_89488e43-e2eb-44a1-ac26-fcb0c87047f6/manager/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.595240 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-kjb9s_01a1d054-85ac-46b5-94f1-7ec657e0658f/kube-rbac-proxy/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.638671 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-kjb9s_01a1d054-85ac-46b5-94f1-7ec657e0658f/manager/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.792067 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-9vtqg_92381aad-0739-4a44-948f-c7dc91808a89/kube-rbac-proxy/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.869893 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-9vtqg_92381aad-0739-4a44-948f-c7dc91808a89/manager/0.log" Nov 24 12:07:36 crc kubenswrapper[4789]: I1124 12:07:36.920013 4789 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-65j74_f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592/kube-rbac-proxy/0.log" Nov 24 12:07:37 crc kubenswrapper[4789]: I1124 12:07:37.044881 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-65j74_f1cdfa4d-b1e5-48c3-b4d7-1b044bfe9592/manager/0.log" Nov 24 12:07:37 crc kubenswrapper[4789]: I1124 12:07:37.183280 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-zq9m5_6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9/kube-rbac-proxy/0.log" Nov 24 12:07:37 crc kubenswrapper[4789]: I1124 12:07:37.254493 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-zq9m5_6a05bbf2-98dc-4086-ac3e-8a8cf5bd7dc9/manager/0.log" Nov 24 12:07:37 crc kubenswrapper[4789]: I1124 12:07:37.439585 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-jk4w9_97d7da9b-f14e-4d8b-9ab0-5607a2a556cf/kube-rbac-proxy/0.log" Nov 24 12:07:37 crc kubenswrapper[4789]: I1124 12:07:37.451616 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-jk4w9_97d7da9b-f14e-4d8b-9ab0-5607a2a556cf/manager/0.log" Nov 24 12:07:37 crc kubenswrapper[4789]: I1124 12:07:37.561233 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-vq62h_123b4cfb-8a48-4e91-8cb7-20a22b3e6b16/kube-rbac-proxy/0.log" Nov 24 12:07:37 crc kubenswrapper[4789]: I1124 12:07:37.604311 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-vq62h_123b4cfb-8a48-4e91-8cb7-20a22b3e6b16/manager/0.log" Nov 24 12:07:37 crc kubenswrapper[4789]: I1124 12:07:37.737272 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cf84c8b4f-2hxj7_d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1/kube-rbac-proxy/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.079122 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6bb74f6778-sddqf_a6a8da19-ed48-499a-b951-722c2294134c/kube-rbac-proxy/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.193921 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6bb74f6778-sddqf_a6a8da19-ed48-499a-b951-722c2294134c/operator/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.350513 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ff2g6_684283c3-7c6e-4252-a66c-19cb552eeb56/registry-server/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.508342 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-tf44z_40d059bb-9e0e-4bba-bea5-866a064bb150/kube-rbac-proxy/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.535311 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cf84c8b4f-2hxj7_d5ae6f26-3332-445e-a58b-bc1ff6e5b6d1/manager/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 
12:07:38.572966 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-tf44z_40d059bb-9e0e-4bba-bea5-866a064bb150/manager/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.692378 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-kdkrp_879f31f8-27f9-4f20-a9cd-b67373fac926/kube-rbac-proxy/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.736926 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-kdkrp_879f31f8-27f9-4f20-a9cd-b67373fac926/manager/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.747998 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-djvqp_553cfbf3-1b3c-4004-9bf9-4b20de969652/operator/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.886721 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-v4frd_57736f24-6289-42e1-918a-cffd058c0e7a/kube-rbac-proxy/0.log" Nov 24 12:07:38 crc kubenswrapper[4789]: I1124 12:07:38.974804 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-v4frd_57736f24-6289-42e1-918a-cffd058c0e7a/manager/0.log" Nov 24 12:07:39 crc kubenswrapper[4789]: I1124 12:07:39.054691 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-8xxh4_42125341-88db-4554-abe6-55807d7d54fa/kube-rbac-proxy/0.log" Nov 24 12:07:39 crc kubenswrapper[4789]: I1124 12:07:39.060660 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-8xxh4_42125341-88db-4554-abe6-55807d7d54fa/manager/0.log" Nov 24 12:07:39 crc kubenswrapper[4789]: I1124 12:07:39.150075 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-ttb9w_56a28c68-0fee-4c04-9461-7f4f4cb166a8/kube-rbac-proxy/0.log" Nov 24 12:07:39 crc kubenswrapper[4789]: I1124 12:07:39.169499 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:07:39 crc kubenswrapper[4789]: E1124 12:07:39.169898 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:07:39 crc kubenswrapper[4789]: I1124 12:07:39.254731 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-ttb9w_56a28c68-0fee-4c04-9461-7f4f4cb166a8/manager/0.log" Nov 24 12:07:39 crc kubenswrapper[4789]: I1124 12:07:39.292801 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-jwwfg_a7de15ed-b91f-490d-bc42-e41e929a22d1/kube-rbac-proxy/0.log" Nov 24 12:07:39 crc kubenswrapper[4789]: I1124 12:07:39.331065 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-jwwfg_a7de15ed-b91f-490d-bc42-e41e929a22d1/manager/0.log" Nov 24 12:07:54 crc kubenswrapper[4789]: I1124 12:07:54.169195 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:07:54 crc kubenswrapper[4789]: E1124 12:07:54.170081 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:07:54 crc kubenswrapper[4789]: I1124 12:07:54.953263 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fxzq9_a6a654d4-4e05-4848-ab14-624f78b93cfa/control-plane-machine-set-operator/0.log" Nov 24 12:07:55 crc kubenswrapper[4789]: I1124 12:07:55.163222 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-klw64_d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c/kube-rbac-proxy/0.log" Nov 24 12:07:55 crc kubenswrapper[4789]: I1124 12:07:55.174900 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-klw64_d90e94ec-ea22-4ba7-a0b0-7b636dcccf9c/machine-api-operator/0.log" Nov 24 12:08:05 crc kubenswrapper[4789]: I1124 12:08:05.170400 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:08:05 crc kubenswrapper[4789]: E1124 12:08:05.171187 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:08:06 crc kubenswrapper[4789]: I1124 12:08:06.591383 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-46llj_8b96cfd6-4b27-48b1-91b5-26a6cef7c9e6/cert-manager-controller/0.log" Nov 24 12:08:06 crc kubenswrapper[4789]: I1124 12:08:06.743676 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-g5g5j_47a486f0-4af5-4bb7-acf5-6b827e216fde/cert-manager-cainjector/0.log" Nov 24 12:08:06 crc kubenswrapper[4789]: I1124 12:08:06.798171 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-m6j4q_00b5b4f8-e390-4c4f-a1dc-b8c13860b689/cert-manager-webhook/0.log" Nov 24 12:08:18 crc kubenswrapper[4789]: I1124 12:08:18.178987 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:08:18 crc kubenswrapper[4789]: E1124 12:08:18.181780 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:08:18 crc kubenswrapper[4789]: I1124 12:08:18.679900 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-znx64_e20c522e-8987-4a5b-84a4-c40098d2e179/nmstate-console-plugin/0.log" Nov 24 12:08:18 crc kubenswrapper[4789]: I1124 12:08:18.782396 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tc6cw_b8e5c0f4-380c-43d6-be7e-335586100004/nmstate-handler/0.log" Nov 24 12:08:18 crc kubenswrapper[4789]: I1124 12:08:18.908374 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-cr456_7912ee90-6561-4ccb-be26-e14a7b5d4215/kube-rbac-proxy/0.log" Nov 24 12:08:18 crc kubenswrapper[4789]: I1124 12:08:18.945600 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-cr456_7912ee90-6561-4ccb-be26-e14a7b5d4215/nmstate-metrics/0.log" Nov 24 12:08:19 crc kubenswrapper[4789]: I1124 12:08:19.081715 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-pf9g5_f1714436-b482-4a7a-9ea2-7ef512ac500c/nmstate-operator/0.log" Nov 24 12:08:19 crc kubenswrapper[4789]: I1124 12:08:19.147013 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-lgcsz_37289b7f-66b0-4c52-98d7-2bbd918a4f4d/nmstate-webhook/0.log" Nov 24 12:08:29 crc kubenswrapper[4789]: I1124 12:08:29.169484 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:08:29 crc kubenswrapper[4789]: E1124 12:08:29.170345 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:08:32 crc kubenswrapper[4789]: I1124 12:08:32.811244 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-trm8h_de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7/kube-rbac-proxy/0.log" Nov 24 12:08:32 crc kubenswrapper[4789]: I1124 12:08:32.916330 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-trm8h_de5e9675-d7e9-4a4f-ba3d-000b5cabd4f7/controller/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.025723 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-frr-files/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.211562 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-frr-files/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.277403 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-metrics/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.294065 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-reloader/0.log" Nov 24 12:08:33 crc 
kubenswrapper[4789]: I1124 12:08:33.303600 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-reloader/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.456208 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-metrics/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.482109 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-frr-files/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.532179 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-reloader/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.567422 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-metrics/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.691524 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-frr-files/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.695988 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-reloader/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.740409 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/cp-metrics/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.780510 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/controller/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.912980 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/frr-metrics/0.log" Nov 24 12:08:33 crc kubenswrapper[4789]: I1124 12:08:33.964116 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/kube-rbac-proxy/0.log" Nov 24 12:08:34 crc kubenswrapper[4789]: I1124 12:08:34.026959 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/kube-rbac-proxy-frr/0.log" Nov 24 12:08:34 crc kubenswrapper[4789]: I1124 12:08:34.213872 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/reloader/0.log" Nov 24 12:08:34 crc kubenswrapper[4789]: I1124 12:08:34.291757 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-bzw25_ea1421cc-29d8-43a2-898f-e12e9978b1fa/frr-k8s-webhook-server/0.log" Nov 24 12:08:34 crc kubenswrapper[4789]: I1124 12:08:34.531485 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c78669894-4cs4c_79ff7401-87f7-494c-8b09-aa9fc59a934b/manager/0.log" Nov 24 12:08:34 crc kubenswrapper[4789]: I1124 12:08:34.764357 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvbfg_937c8174-492c-4125-9fa3-0f62b450e1e3/frr/0.log" Nov 24 12:08:34 crc kubenswrapper[4789]: I1124 12:08:34.797769 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79c46fb6f4-rtbcf_689fdd74-d64c-431d-a036-babb90542dd8/webhook-server/0.log" Nov 24 12:08:34 crc kubenswrapper[4789]: I1124 12:08:34.846410 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fbrt2_c6ea6339-def1-4bf8-ba76-2dce73b451c7/kube-rbac-proxy/0.log" Nov 24 12:08:35 crc kubenswrapper[4789]: I1124 12:08:35.190036 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fbrt2_c6ea6339-def1-4bf8-ba76-2dce73b451c7/speaker/0.log" Nov 24 12:08:42 crc kubenswrapper[4789]: I1124 12:08:42.169687 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:08:42 crc kubenswrapper[4789]: E1124 12:08:42.170354 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:08:46 crc kubenswrapper[4789]: I1124 12:08:46.677717 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx_f6471629-48a8-49da-be9a-ad77354e63b1/util/0.log" Nov 24 12:08:46 crc kubenswrapper[4789]: I1124 12:08:46.847139 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx_f6471629-48a8-49da-be9a-ad77354e63b1/pull/0.log" Nov 24 12:08:46 crc kubenswrapper[4789]: I1124 12:08:46.885881 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx_f6471629-48a8-49da-be9a-ad77354e63b1/util/0.log" Nov 24 12:08:46 crc kubenswrapper[4789]: I1124 12:08:46.911727 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx_f6471629-48a8-49da-be9a-ad77354e63b1/pull/0.log" Nov 24 12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.129443 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx_f6471629-48a8-49da-be9a-ad77354e63b1/util/0.log" Nov 24 12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.147843 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx_f6471629-48a8-49da-be9a-ad77354e63b1/pull/0.log" Nov 24 12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.159488 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772enr8tx_f6471629-48a8-49da-be9a-ad77354e63b1/extract/0.log" Nov 24 12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.338995 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nkmf_6b306b4d-a5ff-4c9c-b070-967f57a7e0fc/extract-utilities/0.log" Nov 24 12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.508250 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nkmf_6b306b4d-a5ff-4c9c-b070-967f57a7e0fc/extract-content/0.log" Nov 24 
12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.567932 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nkmf_6b306b4d-a5ff-4c9c-b070-967f57a7e0fc/extract-utilities/0.log" Nov 24 12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.596285 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nkmf_6b306b4d-a5ff-4c9c-b070-967f57a7e0fc/extract-content/0.log" Nov 24 12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.744258 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nkmf_6b306b4d-a5ff-4c9c-b070-967f57a7e0fc/extract-utilities/0.log" Nov 24 12:08:47 crc kubenswrapper[4789]: I1124 12:08:47.747892 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nkmf_6b306b4d-a5ff-4c9c-b070-967f57a7e0fc/extract-content/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.002573 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jfrbf_023c49aa-b48c-4320-a70f-3d9d969fa712/extract-utilities/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.178332 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4nkmf_6b306b4d-a5ff-4c9c-b070-967f57a7e0fc/registry-server/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.293631 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jfrbf_023c49aa-b48c-4320-a70f-3d9d969fa712/extract-utilities/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.311815 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jfrbf_023c49aa-b48c-4320-a70f-3d9d969fa712/extract-content/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.326648 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jfrbf_023c49aa-b48c-4320-a70f-3d9d969fa712/extract-content/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.447625 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jfrbf_023c49aa-b48c-4320-a70f-3d9d969fa712/extract-utilities/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.477107 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jfrbf_023c49aa-b48c-4320-a70f-3d9d969fa712/extract-content/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.678259 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd_97143caa-58b4-4d96-a4c7-9ec1bb364425/util/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.747873 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jfrbf_023c49aa-b48c-4320-a70f-3d9d969fa712/registry-server/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.857117 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd_97143caa-58b4-4d96-a4c7-9ec1bb364425/util/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.897554 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd_97143caa-58b4-4d96-a4c7-9ec1bb364425/pull/0.log" Nov 24 12:08:48 crc kubenswrapper[4789]: I1124 12:08:48.907804 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd_97143caa-58b4-4d96-a4c7-9ec1bb364425/pull/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.150575 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd_97143caa-58b4-4d96-a4c7-9ec1bb364425/util/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.155913 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd_97143caa-58b4-4d96-a4c7-9ec1bb364425/extract/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.188993 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6pg4rd_97143caa-58b4-4d96-a4c7-9ec1bb364425/pull/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.394287 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pfvts_ce9631bf-85d8-411c-8dc8-612ed608cd07/marketplace-operator/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.461078 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c7xwt_dd57300a-9489-4148-8c58-89477b5d9af4/extract-utilities/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.623967 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c7xwt_dd57300a-9489-4148-8c58-89477b5d9af4/extract-content/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.654173 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c7xwt_dd57300a-9489-4148-8c58-89477b5d9af4/extract-utilities/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.674142 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c7xwt_dd57300a-9489-4148-8c58-89477b5d9af4/extract-content/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.875901 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c7xwt_dd57300a-9489-4148-8c58-89477b5d9af4/extract-utilities/0.log" Nov 24 12:08:49 crc kubenswrapper[4789]: I1124 12:08:49.902559 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c7xwt_dd57300a-9489-4148-8c58-89477b5d9af4/extract-content/0.log" Nov 24 12:08:50 crc kubenswrapper[4789]: I1124 12:08:50.063937 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c7xwt_dd57300a-9489-4148-8c58-89477b5d9af4/registry-server/0.log" Nov 24 12:08:50 crc kubenswrapper[4789]: I1124 12:08:50.093670 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ttzds_a68178ee-eb32-4c58-b08c-ad7b2d2aefce/extract-utilities/0.log" Nov 24 12:08:50 crc kubenswrapper[4789]: I1124 12:08:50.289700 4789 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-ttzds_a68178ee-eb32-4c58-b08c-ad7b2d2aefce/extract-utilities/0.log" Nov 24 12:08:50 crc kubenswrapper[4789]: I1124 12:08:50.302420 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ttzds_a68178ee-eb32-4c58-b08c-ad7b2d2aefce/extract-content/0.log" Nov 24 12:08:50 crc kubenswrapper[4789]: I1124 12:08:50.330177 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ttzds_a68178ee-eb32-4c58-b08c-ad7b2d2aefce/extract-content/0.log" Nov 24 12:08:50 crc kubenswrapper[4789]: I1124 12:08:50.455422 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ttzds_a68178ee-eb32-4c58-b08c-ad7b2d2aefce/extract-utilities/0.log" Nov 24 12:08:50 crc kubenswrapper[4789]: I1124 12:08:50.492037 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ttzds_a68178ee-eb32-4c58-b08c-ad7b2d2aefce/extract-content/0.log" Nov 24 12:08:50 crc kubenswrapper[4789]: I1124 12:08:50.700252 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ttzds_a68178ee-eb32-4c58-b08c-ad7b2d2aefce/registry-server/0.log" Nov 24 12:08:57 crc kubenswrapper[4789]: I1124 12:08:57.169419 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:08:57 crc kubenswrapper[4789]: E1124 12:08:57.170223 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:09:12 crc kubenswrapper[4789]: I1124 12:09:12.169524 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:09:12 crc kubenswrapper[4789]: E1124 12:09:12.170303 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:09:24 crc kubenswrapper[4789]: I1124 12:09:24.171418 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:09:24 crc kubenswrapper[4789]: E1124 12:09:24.172319 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:09:35 crc kubenswrapper[4789]: I1124 12:09:35.169226 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:09:35 crc kubenswrapper[4789]: E1124 12:09:35.170127 4789 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:09:46 crc kubenswrapper[4789]: I1124 12:09:46.169904 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:09:46 crc kubenswrapper[4789]: E1124 12:09:46.170636 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:10:00 crc kubenswrapper[4789]: I1124 12:10:00.753586 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xkrpc"] Nov 24 12:10:00 crc kubenswrapper[4789]: E1124 12:10:00.754756 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d76766-a4a8-456f-89aa-7bbb81c8ff9c" containerName="container-00" Nov 24 12:10:00 crc kubenswrapper[4789]: I1124 12:10:00.754776 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d76766-a4a8-456f-89aa-7bbb81c8ff9c" containerName="container-00" Nov 24 12:10:00 crc kubenswrapper[4789]: I1124 12:10:00.755364 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d76766-a4a8-456f-89aa-7bbb81c8ff9c" containerName="container-00" Nov 24 12:10:00 crc kubenswrapper[4789]: I1124 12:10:00.757907 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:00 crc kubenswrapper[4789]: I1124 12:10:00.781863 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xkrpc"] Nov 24 12:10:00 crc kubenswrapper[4789]: I1124 12:10:00.944202 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-utilities\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:00 crc kubenswrapper[4789]: I1124 12:10:00.944315 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-catalog-content\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:00 crc kubenswrapper[4789]: I1124 12:10:00.944376 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwbj\" (UniqueName: \"kubernetes.io/projected/11f82c23-a805-489a-b0ab-44d33cf336c1-kube-api-access-kxwbj\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.045566 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-utilities\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.046156 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-catalog-content\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.046553 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxwbj\" (UniqueName: \"kubernetes.io/projected/11f82c23-a805-489a-b0ab-44d33cf336c1-kube-api-access-kxwbj\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.046471 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-catalog-content\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.045930 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-utilities\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.066942 4789 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kxwbj\" (UniqueName: \"kubernetes.io/projected/11f82c23-a805-489a-b0ab-44d33cf336c1-kube-api-access-kxwbj\") pod \"community-operators-xkrpc\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.110405 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.169235 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:10:01 crc kubenswrapper[4789]: E1124 12:10:01.169593 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:10:01 crc kubenswrapper[4789]: I1124 12:10:01.790013 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xkrpc"] Nov 24 12:10:02 crc kubenswrapper[4789]: I1124 12:10:02.696631 4789 generic.go:334] "Generic (PLEG): container finished" podID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerID="a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf" exitCode=0 Nov 24 12:10:02 crc kubenswrapper[4789]: I1124 12:10:02.696680 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkrpc" event={"ID":"11f82c23-a805-489a-b0ab-44d33cf336c1","Type":"ContainerDied","Data":"a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf"} Nov 24 12:10:02 crc kubenswrapper[4789]: I1124 12:10:02.696932 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkrpc" event={"ID":"11f82c23-a805-489a-b0ab-44d33cf336c1","Type":"ContainerStarted","Data":"4812bd6bf8b0866259b7874696880183d68fe43ba953057e1b4e841b18bf5439"} Nov 24 12:10:02 crc kubenswrapper[4789]: I1124 12:10:02.698344 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:10:04 crc kubenswrapper[4789]: I1124 12:10:04.730549 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkrpc" event={"ID":"11f82c23-a805-489a-b0ab-44d33cf336c1","Type":"ContainerStarted","Data":"59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8"} Nov 24 12:10:05 crc kubenswrapper[4789]: I1124 12:10:05.740327 4789 generic.go:334] "Generic (PLEG): container finished" podID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerID="59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8" exitCode=0 Nov 24 12:10:05 crc kubenswrapper[4789]: I1124 12:10:05.740654 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkrpc" event={"ID":"11f82c23-a805-489a-b0ab-44d33cf336c1","Type":"ContainerDied","Data":"59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8"} Nov 24 12:10:06 crc kubenswrapper[4789]: I1124 12:10:06.752787 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkrpc" 
event={"ID":"11f82c23-a805-489a-b0ab-44d33cf336c1","Type":"ContainerStarted","Data":"a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d"} Nov 24 12:10:06 crc kubenswrapper[4789]: I1124 12:10:06.778728 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xkrpc" podStartSLOduration=3.327795121 podStartE2EDuration="6.77870479s" podCreationTimestamp="2025-11-24 12:10:00 +0000 UTC" firstStartedPulling="2025-11-24 12:10:02.697994873 +0000 UTC m=+2385.280466252" lastFinishedPulling="2025-11-24 12:10:06.148904542 +0000 UTC m=+2388.731375921" observedRunningTime="2025-11-24 12:10:06.771206188 +0000 UTC m=+2389.353677567" watchObservedRunningTime="2025-11-24 12:10:06.77870479 +0000 UTC m=+2389.361176169" Nov 24 12:10:11 crc kubenswrapper[4789]: I1124 12:10:11.111130 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:11 crc kubenswrapper[4789]: I1124 12:10:11.111438 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:11 crc kubenswrapper[4789]: I1124 12:10:11.154112 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:11 crc kubenswrapper[4789]: I1124 12:10:11.842643 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:11 crc kubenswrapper[4789]: I1124 12:10:11.897286 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xkrpc"] Nov 24 12:10:13 crc kubenswrapper[4789]: I1124 12:10:13.169862 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:10:13 crc kubenswrapper[4789]: E1124 12:10:13.171011 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:10:13 crc kubenswrapper[4789]: I1124 12:10:13.808516 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xkrpc" podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerName="registry-server" containerID="cri-o://a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d" gracePeriod=2 Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.236793 4789 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.371615 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-utilities\") pod \"11f82c23-a805-489a-b0ab-44d33cf336c1\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.371819 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-catalog-content\") pod \"11f82c23-a805-489a-b0ab-44d33cf336c1\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.372162 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxwbj\" (UniqueName: \"kubernetes.io/projected/11f82c23-a805-489a-b0ab-44d33cf336c1-kube-api-access-kxwbj\") pod \"11f82c23-a805-489a-b0ab-44d33cf336c1\" (UID: \"11f82c23-a805-489a-b0ab-44d33cf336c1\") " Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.372531 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-utilities" (OuterVolumeSpecName: "utilities") pod "11f82c23-a805-489a-b0ab-44d33cf336c1" (UID: "11f82c23-a805-489a-b0ab-44d33cf336c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.377337 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f82c23-a805-489a-b0ab-44d33cf336c1-kube-api-access-kxwbj" (OuterVolumeSpecName: "kube-api-access-kxwbj") pod "11f82c23-a805-489a-b0ab-44d33cf336c1" (UID: "11f82c23-a805-489a-b0ab-44d33cf336c1"). InnerVolumeSpecName "kube-api-access-kxwbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.434774 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11f82c23-a805-489a-b0ab-44d33cf336c1" (UID: "11f82c23-a805-489a-b0ab-44d33cf336c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.474505 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.474542 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f82c23-a805-489a-b0ab-44d33cf336c1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.474553 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxwbj\" (UniqueName: \"kubernetes.io/projected/11f82c23-a805-489a-b0ab-44d33cf336c1-kube-api-access-kxwbj\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.821932 4789 generic.go:334] "Generic (PLEG): container finished" podID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerID="a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d" exitCode=0 Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.822027 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkrpc" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.822019 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkrpc" event={"ID":"11f82c23-a805-489a-b0ab-44d33cf336c1","Type":"ContainerDied","Data":"a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d"} Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.822362 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkrpc" event={"ID":"11f82c23-a805-489a-b0ab-44d33cf336c1","Type":"ContainerDied","Data":"4812bd6bf8b0866259b7874696880183d68fe43ba953057e1b4e841b18bf5439"} Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.822386 4789 scope.go:117] "RemoveContainer" containerID="a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.847567 4789 scope.go:117] "RemoveContainer" containerID="59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.863618 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xkrpc"] Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.875659 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xkrpc"] Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.879903 4789 scope.go:117] "RemoveContainer" containerID="a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.915580 4789 scope.go:117] "RemoveContainer" containerID="a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d" Nov 24 12:10:14 crc kubenswrapper[4789]: E1124 12:10:14.916353 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d\": container with ID starting with a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d not found: ID does not exist" containerID="a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.916416 
4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d"} err="failed to get container status \"a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d\": rpc error: code = NotFound desc = could not find container \"a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d\": container with ID starting with a7affaabeb2b290ac84738d615c1bc2b4b1e6f6e74d47a1854904fe87a73448d not found: ID does not exist" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.916444 4789 scope.go:117] "RemoveContainer" containerID="59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8" Nov 24 12:10:14 crc kubenswrapper[4789]: E1124 12:10:14.916939 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8\": container with ID starting with 59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8 not found: ID does not exist" containerID="59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.916976 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8"} err="failed to get container status \"59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8\": rpc error: code = NotFound desc = could not find container \"59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8\": container with ID starting with 59ee6293f30f0b04da0dbcd157ecbddad1ff59cf0b9f76e9f7b894a511b56ac8 not found: ID does not exist" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.917007 4789 scope.go:117] "RemoveContainer" containerID="a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf" Nov 24 12:10:14 crc kubenswrapper[4789]: E1124 12:10:14.917376 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf\": container with ID starting with a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf not found: ID does not exist" containerID="a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf" Nov 24 12:10:14 crc kubenswrapper[4789]: I1124 12:10:14.917403 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf"} err="failed to get container status \"a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf\": rpc error: code = NotFound desc = could not find container \"a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf\": container with ID starting with a3b6651ff95e88c8e527067465fd0201128474e3ed7c2de985f5dc534261d2cf not found: ID does not exist" Nov 24 12:10:16 crc kubenswrapper[4789]: I1124 12:10:16.179569 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" path="/var/lib/kubelet/pods/11f82c23-a805-489a-b0ab-44d33cf336c1/volumes" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.250192 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fwt5j"] Nov 24 12:10:23 crc kubenswrapper[4789]: E1124 12:10:23.250999 4789 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerName="extract-utilities" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.251019 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerName="extract-utilities" Nov 24 12:10:23 crc kubenswrapper[4789]: E1124 12:10:23.251045 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerName="registry-server" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.251053 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerName="registry-server" Nov 24 12:10:23 crc kubenswrapper[4789]: E1124 12:10:23.251071 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerName="extract-content" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.251080 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerName="extract-content" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.251333 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f82c23-a805-489a-b0ab-44d33cf336c1" containerName="registry-server" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.252997 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.282977 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwt5j"] Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.361720 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-catalog-content\") pod \"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.361989 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-utilities\") pod \"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.362215 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldlf\" (UniqueName: \"kubernetes.io/projected/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-kube-api-access-cldlf\") pod \"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.464613 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-catalog-content\") pod \"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.464658 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-utilities\") pod 
\"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.464715 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cldlf\" (UniqueName: \"kubernetes.io/projected/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-kube-api-access-cldlf\") pod \"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.465553 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-utilities\") pod \"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.465934 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-catalog-content\") pod \"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.491683 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cldlf\" (UniqueName: \"kubernetes.io/projected/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-kube-api-access-cldlf\") pod \"certified-operators-fwt5j\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.584829 4789 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:23 crc kubenswrapper[4789]: I1124 12:10:23.988064 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwt5j"] Nov 24 12:10:24 crc kubenswrapper[4789]: I1124 12:10:24.169323 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:10:24 crc kubenswrapper[4789]: E1124 12:10:24.169798 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:10:24 crc kubenswrapper[4789]: I1124 12:10:24.901579 4789 generic.go:334] "Generic (PLEG): container finished" podID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerID="06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192" exitCode=0 Nov 24 12:10:24 crc kubenswrapper[4789]: I1124 12:10:24.901888 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwt5j" event={"ID":"533852c3-4145-4c16-a6cb-a5e9f87ce5f7","Type":"ContainerDied","Data":"06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192"} Nov 24 12:10:24 crc kubenswrapper[4789]: I1124 12:10:24.901919 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwt5j" event={"ID":"533852c3-4145-4c16-a6cb-a5e9f87ce5f7","Type":"ContainerStarted","Data":"4a554f75f09f90d28f0d762eb8510e043b20682496d337399085cbb33c1aca86"} Nov 24 12:10:25 crc kubenswrapper[4789]: I1124 12:10:25.913661 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwt5j" event={"ID":"533852c3-4145-4c16-a6cb-a5e9f87ce5f7","Type":"ContainerStarted","Data":"1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e"} Nov 24 12:10:27 crc kubenswrapper[4789]: I1124 12:10:27.935172 4789 generic.go:334] "Generic (PLEG): container finished" podID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerID="1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e" exitCode=0 Nov 24 12:10:27 crc kubenswrapper[4789]: I1124 12:10:27.935254 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwt5j" event={"ID":"533852c3-4145-4c16-a6cb-a5e9f87ce5f7","Type":"ContainerDied","Data":"1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e"} Nov 24 12:10:28 crc kubenswrapper[4789]: I1124 12:10:28.946785 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwt5j" event={"ID":"533852c3-4145-4c16-a6cb-a5e9f87ce5f7","Type":"ContainerStarted","Data":"07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9"} Nov 24 12:10:28 crc kubenswrapper[4789]: I1124 12:10:28.973693 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fwt5j" podStartSLOduration=2.195076167 podStartE2EDuration="5.973669193s" podCreationTimestamp="2025-11-24 12:10:23 +0000 UTC" firstStartedPulling="2025-11-24 12:10:24.904249871 +0000 UTC m=+2407.486721260" lastFinishedPulling="2025-11-24 12:10:28.682842907 +0000 UTC m=+2411.265314286" observedRunningTime="2025-11-24 
12:10:28.966713193 +0000 UTC m=+2411.549184572" watchObservedRunningTime="2025-11-24 12:10:28.973669193 +0000 UTC m=+2411.556140572" Nov 24 12:10:33 crc kubenswrapper[4789]: I1124 12:10:33.585998 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:33 crc kubenswrapper[4789]: I1124 12:10:33.586506 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:33 crc kubenswrapper[4789]: I1124 12:10:33.644174 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:34 crc kubenswrapper[4789]: I1124 12:10:34.040934 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:34 crc kubenswrapper[4789]: I1124 12:10:34.090948 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwt5j"] Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.001309 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fwt5j" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerName="registry-server" containerID="cri-o://07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9" gracePeriod=2 Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.449242 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.516172 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-utilities\") pod \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.516258 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-catalog-content\") pod \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.516661 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cldlf\" (UniqueName: \"kubernetes.io/projected/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-kube-api-access-cldlf\") pod \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\" (UID: \"533852c3-4145-4c16-a6cb-a5e9f87ce5f7\") " Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.530225 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-kube-api-access-cldlf" (OuterVolumeSpecName: "kube-api-access-cldlf") pod "533852c3-4145-4c16-a6cb-a5e9f87ce5f7" (UID: "533852c3-4145-4c16-a6cb-a5e9f87ce5f7"). InnerVolumeSpecName "kube-api-access-cldlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.530813 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-utilities" (OuterVolumeSpecName: "utilities") pod "533852c3-4145-4c16-a6cb-a5e9f87ce5f7" (UID: "533852c3-4145-4c16-a6cb-a5e9f87ce5f7"). InnerVolumeSpecName "utilities". 
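[Editor's sketch] The recurring "back-off 5m0s restarting failed container" entries for machine-config-daemon show the restart backoff at its ceiling. Assuming kubelet's documented default restart backoff (10s initial delay, doubling per failed restart, capped at 5m), the schedule can be reproduced with a few lines of Go; the 5m cap is the value reported in the log, the 10s/doubling parameters are the documented defaults, not read from this log:

package main

import (
	"fmt"
	"time"
)

// restartDelay returns the wait before the next restart attempt after
// the given number of consecutive failed restarts.
func restartDelay(restarts int) time.Duration {
	d := 10 * time.Second // documented default initial delay (assumption)
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute // the "back-off 5m0s" cap reported above
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("after %d restarts: wait %s\n", n, restartDelay(n))
	}
}
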
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.619735 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cldlf\" (UniqueName: \"kubernetes.io/projected/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-kube-api-access-cldlf\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:36 crc kubenswrapper[4789]: I1124 12:10:36.624265 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.030409 4789 generic.go:334] "Generic (PLEG): container finished" podID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerID="07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9" exitCode=0 Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.030455 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwt5j" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.030478 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwt5j" event={"ID":"533852c3-4145-4c16-a6cb-a5e9f87ce5f7","Type":"ContainerDied","Data":"07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9"} Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.031910 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwt5j" event={"ID":"533852c3-4145-4c16-a6cb-a5e9f87ce5f7","Type":"ContainerDied","Data":"4a554f75f09f90d28f0d762eb8510e043b20682496d337399085cbb33c1aca86"} Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.031933 4789 scope.go:117] "RemoveContainer" containerID="07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.062748 4789 scope.go:117] "RemoveContainer" containerID="1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.083213 4789 scope.go:117] "RemoveContainer" containerID="06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.141252 4789 scope.go:117] "RemoveContainer" containerID="07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9" Nov 24 12:10:37 crc kubenswrapper[4789]: E1124 12:10:37.141730 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9\": container with ID starting with 07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9 not found: ID does not exist" containerID="07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.141860 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9"} err="failed to get container status \"07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9\": rpc error: code = NotFound desc = could not find container \"07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9\": container with ID starting with 07cd4a18611e55df0bbff68cc7fa2dbb8f3a8ab9a67ed11c5cda3abb4ecef9d9 not found: ID does not exist" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.141953 4789 scope.go:117] 
"RemoveContainer" containerID="1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e" Nov 24 12:10:37 crc kubenswrapper[4789]: E1124 12:10:37.142314 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e\": container with ID starting with 1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e not found: ID does not exist" containerID="1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.142365 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e"} err="failed to get container status \"1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e\": rpc error: code = NotFound desc = could not find container \"1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e\": container with ID starting with 1b98d1e3c9b231bb24a82cf09e4880145d91e56cd579782c873ee053def15e8e not found: ID does not exist" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.142392 4789 scope.go:117] "RemoveContainer" containerID="06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192" Nov 24 12:10:37 crc kubenswrapper[4789]: E1124 12:10:37.142911 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192\": container with ID starting with 06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192 not found: ID does not exist" containerID="06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.142940 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192"} err="failed to get container status \"06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192\": rpc error: code = NotFound desc = could not find container \"06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192\": container with ID starting with 06d220a45e98d3db036ea650405ffacf2288965ecc97fede6c2108a0fd7ba192 not found: ID does not exist" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.169363 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:10:37 crc kubenswrapper[4789]: E1124 12:10:37.169740 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.207649 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "533852c3-4145-4c16-a6cb-a5e9f87ce5f7" (UID: "533852c3-4145-4c16-a6cb-a5e9f87ce5f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.238021 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533852c3-4145-4c16-a6cb-a5e9f87ce5f7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.371650 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwt5j"] Nov 24 12:10:37 crc kubenswrapper[4789]: I1124 12:10:37.382401 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fwt5j"] Nov 24 12:10:38 crc kubenswrapper[4789]: I1124 12:10:38.179536 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" path="/var/lib/kubelet/pods/533852c3-4145-4c16-a6cb-a5e9f87ce5f7/volumes" Nov 24 12:10:47 crc kubenswrapper[4789]: I1124 12:10:47.120631 4789 generic.go:334] "Generic (PLEG): container finished" podID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerID="e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0" exitCode=0 Nov 24 12:10:47 crc kubenswrapper[4789]: I1124 12:10:47.122139 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bk76d/must-gather-nc82q" event={"ID":"a00101c3-23f4-4180-b2f3-e601ba7afb4f","Type":"ContainerDied","Data":"e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0"} Nov 24 12:10:47 crc kubenswrapper[4789]: I1124 12:10:47.123084 4789 scope.go:117] "RemoveContainer" containerID="e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0" Nov 24 12:10:47 crc kubenswrapper[4789]: I1124 12:10:47.991357 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bk76d_must-gather-nc82q_a00101c3-23f4-4180-b2f3-e601ba7afb4f/gather/0.log" Nov 24 12:10:49 crc kubenswrapper[4789]: I1124 12:10:49.169795 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:10:49 crc kubenswrapper[4789]: E1124 12:10:49.170029 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:10:56 crc kubenswrapper[4789]: I1124 12:10:56.525326 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bk76d/must-gather-nc82q"] Nov 24 12:10:56 crc kubenswrapper[4789]: I1124 12:10:56.528958 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bk76d/must-gather-nc82q" podUID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerName="copy" containerID="cri-o://34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f" gracePeriod=2 Nov 24 12:10:56 crc kubenswrapper[4789]: I1124 12:10:56.532923 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bk76d/must-gather-nc82q"] Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.118171 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bk76d_must-gather-nc82q_a00101c3-23f4-4180-b2f3-e601ba7afb4f/copy/0.log" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 
12:10:57.118844 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.213893 4789 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bk76d_must-gather-nc82q_a00101c3-23f4-4180-b2f3-e601ba7afb4f/copy/0.log" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.214629 4789 generic.go:334] "Generic (PLEG): container finished" podID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerID="34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f" exitCode=143 Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.214703 4789 scope.go:117] "RemoveContainer" containerID="34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.214818 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bk76d/must-gather-nc82q" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.235496 4789 scope.go:117] "RemoveContainer" containerID="e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.236603 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a00101c3-23f4-4180-b2f3-e601ba7afb4f-must-gather-output\") pod \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\" (UID: \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\") " Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.236755 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2qzd\" (UniqueName: \"kubernetes.io/projected/a00101c3-23f4-4180-b2f3-e601ba7afb4f-kube-api-access-p2qzd\") pod \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\" (UID: \"a00101c3-23f4-4180-b2f3-e601ba7afb4f\") " Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.242197 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00101c3-23f4-4180-b2f3-e601ba7afb4f-kube-api-access-p2qzd" (OuterVolumeSpecName: "kube-api-access-p2qzd") pod "a00101c3-23f4-4180-b2f3-e601ba7afb4f" (UID: "a00101c3-23f4-4180-b2f3-e601ba7afb4f"). InnerVolumeSpecName "kube-api-access-p2qzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.338895 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2qzd\" (UniqueName: \"kubernetes.io/projected/a00101c3-23f4-4180-b2f3-e601ba7afb4f-kube-api-access-p2qzd\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.357594 4789 scope.go:117] "RemoveContainer" containerID="34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f" Nov 24 12:10:57 crc kubenswrapper[4789]: E1124 12:10:57.358234 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f\": container with ID starting with 34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f not found: ID does not exist" containerID="34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.359090 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f"} err="failed to get container status \"34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f\": rpc error: code = NotFound desc = could not find container \"34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f\": container with ID starting with 34732292cfb98294bf6c7330846136dabc843dc22c4670af4bd13876aaeadb0f not found: ID does not exist" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.359137 4789 scope.go:117] "RemoveContainer" containerID="e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0" Nov 24 12:10:57 crc kubenswrapper[4789]: E1124 12:10:57.360910 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0\": container with ID starting with e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0 not found: ID does not exist" containerID="e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.360969 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0"} err="failed to get container status \"e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0\": rpc error: code = NotFound desc = could not find container \"e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0\": container with ID starting with e42811f688385c96b02a2e92983a5a69b85338a27e7eb0a7e6da8c77daaf14e0 not found: ID does not exist" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.396347 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a00101c3-23f4-4180-b2f3-e601ba7afb4f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a00101c3-23f4-4180-b2f3-e601ba7afb4f" (UID: "a00101c3-23f4-4180-b2f3-e601ba7afb4f"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:10:57 crc kubenswrapper[4789]: I1124 12:10:57.440751 4789 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a00101c3-23f4-4180-b2f3-e601ba7afb4f-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 12:10:58 crc kubenswrapper[4789]: I1124 12:10:58.179929 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" path="/var/lib/kubelet/pods/a00101c3-23f4-4180-b2f3-e601ba7afb4f/volumes" Nov 24 12:11:04 crc kubenswrapper[4789]: I1124 12:11:04.169741 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:11:04 crc kubenswrapper[4789]: E1124 12:11:04.170586 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:11:16 crc kubenswrapper[4789]: I1124 12:11:16.169311 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:11:16 crc kubenswrapper[4789]: E1124 12:11:16.170075 4789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9czvn_openshift-machine-config-operator(30c4a832-f0e4-481b-a474-3ecea86049f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" Nov 24 12:11:29 crc kubenswrapper[4789]: I1124 12:11:29.169758 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:11:29 crc kubenswrapper[4789]: I1124 12:11:29.520548 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"04f24f8d27e145f3a39a418682f30fe82f3f89a820d4c39ded56c784747c1358"} Nov 24 12:12:34 crc kubenswrapper[4789]: I1124 12:12:34.250122 4789 scope.go:117] "RemoveContainer" containerID="389d969339073445e4e34f28fa362d200f598199162672006e0856648172130e" Nov 24 12:13:50 crc kubenswrapper[4789]: I1124 12:13:50.162107 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:13:50 crc kubenswrapper[4789]: I1124 12:13:50.162659 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:14:20 crc kubenswrapper[4789]: I1124 12:14:20.162321 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:14:20 crc kubenswrapper[4789]: I1124 12:14:20.162992 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:14:50 crc kubenswrapper[4789]: I1124 12:14:50.162707 4789 patch_prober.go:28] interesting pod/machine-config-daemon-9czvn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:14:50 crc kubenswrapper[4789]: I1124 12:14:50.163369 4789 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:14:50 crc kubenswrapper[4789]: I1124 12:14:50.163419 4789 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" Nov 24 12:14:50 crc kubenswrapper[4789]: I1124 12:14:50.164536 4789 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"04f24f8d27e145f3a39a418682f30fe82f3f89a820d4c39ded56c784747c1358"} pod="openshift-machine-config-operator/machine-config-daemon-9czvn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:14:50 crc kubenswrapper[4789]: I1124 12:14:50.164632 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" podUID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerName="machine-config-daemon" containerID="cri-o://04f24f8d27e145f3a39a418682f30fe82f3f89a820d4c39ded56c784747c1358" gracePeriod=600 Nov 24 12:14:51 crc kubenswrapper[4789]: I1124 12:14:51.273758 4789 generic.go:334] "Generic (PLEG): container finished" podID="30c4a832-f0e4-481b-a474-3ecea86049f6" containerID="04f24f8d27e145f3a39a418682f30fe82f3f89a820d4c39ded56c784747c1358" exitCode=0 Nov 24 12:14:51 crc kubenswrapper[4789]: I1124 12:14:51.274029 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerDied","Data":"04f24f8d27e145f3a39a418682f30fe82f3f89a820d4c39ded56c784747c1358"} Nov 24 12:14:51 crc kubenswrapper[4789]: I1124 12:14:51.274376 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9czvn" event={"ID":"30c4a832-f0e4-481b-a474-3ecea86049f6","Type":"ContainerStarted","Data":"3cb67348da05f017a738c1be7028fd9462acf93e3dbcc71ffd541613196fbc0b"} Nov 24 12:14:51 crc kubenswrapper[4789]: I1124 12:14:51.274397 4789 scope.go:117] "RemoveContainer" containerID="e0548ff4b57302caa6b7a362f06382ae8c3563988da3b37011e15cb6b4702acd" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.145769 4789 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"] Nov 24 12:15:00 crc kubenswrapper[4789]: E1124 12:15:00.146538 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerName="registry-server" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.146551 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerName="registry-server" Nov 24 12:15:00 crc kubenswrapper[4789]: E1124 12:15:00.146595 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerName="copy" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.146600 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerName="copy" Nov 24 12:15:00 crc kubenswrapper[4789]: E1124 12:15:00.146610 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerName="extract-content" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.146616 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerName="extract-content" Nov 24 12:15:00 crc kubenswrapper[4789]: E1124 12:15:00.146651 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerName="extract-utilities" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.146657 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerName="extract-utilities" Nov 24 12:15:00 crc kubenswrapper[4789]: E1124 12:15:00.146669 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerName="gather" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.146674 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerName="gather" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.146857 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerName="gather" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.146871 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="533852c3-4145-4c16-a6cb-a5e9f87ce5f7" containerName="registry-server" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.146882 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00101c3-23f4-4180-b2f3-e601ba7afb4f" containerName="copy" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.147438 4789 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.147438 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.165231 4789 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.169915 4789 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.222727 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"]
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.258052 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/358e79ff-c67e-4898-bd94-88df0af14fb5-config-volume\") pod \"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.258154 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9dqm\" (UniqueName: \"kubernetes.io/projected/358e79ff-c67e-4898-bd94-88df0af14fb5-kube-api-access-d9dqm\") pod \"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.258358 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/358e79ff-c67e-4898-bd94-88df0af14fb5-secret-volume\") pod \"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.360016 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/358e79ff-c67e-4898-bd94-88df0af14fb5-config-volume\") pod \"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.360103 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9dqm\" (UniqueName: \"kubernetes.io/projected/358e79ff-c67e-4898-bd94-88df0af14fb5-kube-api-access-d9dqm\") pod \"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"
Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.360174 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/358e79ff-c67e-4898-bd94-88df0af14fb5-secret-volume\") pod \"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"
\"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.372956 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/358e79ff-c67e-4898-bd94-88df0af14fb5-secret-volume\") pod \"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.387488 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9dqm\" (UniqueName: \"kubernetes.io/projected/358e79ff-c67e-4898-bd94-88df0af14fb5-kube-api-access-d9dqm\") pod \"collect-profiles-29399775-kdjrk\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.476571 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" Nov 24 12:15:00 crc kubenswrapper[4789]: I1124 12:15:00.956868 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"] Nov 24 12:15:01 crc kubenswrapper[4789]: I1124 12:15:01.361897 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" event={"ID":"358e79ff-c67e-4898-bd94-88df0af14fb5","Type":"ContainerStarted","Data":"187f60c7d73aadc6bf3f99ab810061606d7d62cf084d2cf1be243abaa12caecd"} Nov 24 12:15:01 crc kubenswrapper[4789]: I1124 12:15:01.362962 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" event={"ID":"358e79ff-c67e-4898-bd94-88df0af14fb5","Type":"ContainerStarted","Data":"7f5178a18c48661fa0e9bd3b734b5ea4374fb18022a9aad64046999825c70dc7"} Nov 24 12:15:01 crc kubenswrapper[4789]: I1124 12:15:01.384009 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" podStartSLOduration=1.383985999 podStartE2EDuration="1.383985999s" podCreationTimestamp="2025-11-24 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:01.37717351 +0000 UTC m=+2683.959644899" watchObservedRunningTime="2025-11-24 12:15:01.383985999 +0000 UTC m=+2683.966457378" Nov 24 12:15:02 crc kubenswrapper[4789]: I1124 12:15:02.370856 4789 generic.go:334] "Generic (PLEG): container finished" podID="358e79ff-c67e-4898-bd94-88df0af14fb5" containerID="187f60c7d73aadc6bf3f99ab810061606d7d62cf084d2cf1be243abaa12caecd" exitCode=0 Nov 24 12:15:02 crc kubenswrapper[4789]: I1124 12:15:02.370901 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" event={"ID":"358e79ff-c67e-4898-bd94-88df0af14fb5","Type":"ContainerDied","Data":"187f60c7d73aadc6bf3f99ab810061606d7d62cf084d2cf1be243abaa12caecd"} Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.696445 4789 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.696445 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk"
Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.831187 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9dqm\" (UniqueName: \"kubernetes.io/projected/358e79ff-c67e-4898-bd94-88df0af14fb5-kube-api-access-d9dqm\") pod \"358e79ff-c67e-4898-bd94-88df0af14fb5\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") "
Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.831361 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/358e79ff-c67e-4898-bd94-88df0af14fb5-config-volume\") pod \"358e79ff-c67e-4898-bd94-88df0af14fb5\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") "
Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.831517 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/358e79ff-c67e-4898-bd94-88df0af14fb5-secret-volume\") pod \"358e79ff-c67e-4898-bd94-88df0af14fb5\" (UID: \"358e79ff-c67e-4898-bd94-88df0af14fb5\") "
Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.832059 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358e79ff-c67e-4898-bd94-88df0af14fb5-config-volume" (OuterVolumeSpecName: "config-volume") pod "358e79ff-c67e-4898-bd94-88df0af14fb5" (UID: "358e79ff-c67e-4898-bd94-88df0af14fb5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.837292 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/358e79ff-c67e-4898-bd94-88df0af14fb5-kube-api-access-d9dqm" (OuterVolumeSpecName: "kube-api-access-d9dqm") pod "358e79ff-c67e-4898-bd94-88df0af14fb5" (UID: "358e79ff-c67e-4898-bd94-88df0af14fb5"). InnerVolumeSpecName "kube-api-access-d9dqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.933646 4789 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/358e79ff-c67e-4898-bd94-88df0af14fb5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.933957 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9dqm\" (UniqueName: \"kubernetes.io/projected/358e79ff-c67e-4898-bd94-88df0af14fb5-kube-api-access-d9dqm\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:03 crc kubenswrapper[4789]: I1124 12:15:03.934294 4789 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/358e79ff-c67e-4898-bd94-88df0af14fb5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[4789]: I1124 12:15:04.386079 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" event={"ID":"358e79ff-c67e-4898-bd94-88df0af14fb5","Type":"ContainerDied","Data":"7f5178a18c48661fa0e9bd3b734b5ea4374fb18022a9aad64046999825c70dc7"} Nov 24 12:15:04 crc kubenswrapper[4789]: I1124 12:15:04.386131 4789 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5178a18c48661fa0e9bd3b734b5ea4374fb18022a9aad64046999825c70dc7" Nov 24 12:15:04 crc kubenswrapper[4789]: I1124 12:15:04.386200 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-kdjrk" Nov 24 12:15:04 crc kubenswrapper[4789]: I1124 12:15:04.454376 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb"] Nov 24 12:15:04 crc kubenswrapper[4789]: I1124 12:15:04.464274 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-77vnb"] Nov 24 12:15:06 crc kubenswrapper[4789]: I1124 12:15:06.190223 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51acce1-f5f7-44d8-aadf-ae468cf2e29b" path="/var/lib/kubelet/pods/c51acce1-f5f7-44d8-aadf-ae468cf2e29b/volumes" Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.143337 4789 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-59vrl"] Nov 24 12:15:19 crc kubenswrapper[4789]: E1124 12:15:19.144288 4789 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358e79ff-c67e-4898-bd94-88df0af14fb5" containerName="collect-profiles" Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.144305 4789 state_mem.go:107] "Deleted CPUSet assignment" podUID="358e79ff-c67e-4898-bd94-88df0af14fb5" containerName="collect-profiles" Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.144550 4789 memory_manager.go:354] "RemoveStaleState removing state" podUID="358e79ff-c67e-4898-bd94-88df0af14fb5" containerName="collect-profiles" Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.146035 4789 util.go:30] "No sandbox for pod can be found. 
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.146035 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.149862 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gc6\" (UniqueName: \"kubernetes.io/projected/8427ca80-ceff-4c79-9a83-3aebadb74250-kube-api-access-z9gc6\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.150189 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-catalog-content\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.150322 4789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-utilities\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.163088 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-59vrl"]
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.251789 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-catalog-content\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.251860 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-utilities\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.251934 4789 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9gc6\" (UniqueName: \"kubernetes.io/projected/8427ca80-ceff-4c79-9a83-3aebadb74250-kube-api-access-z9gc6\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.252705 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-catalog-content\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.253167 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-utilities\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.276669 4789 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9gc6\" (UniqueName: \"kubernetes.io/projected/8427ca80-ceff-4c79-9a83-3aebadb74250-kube-api-access-z9gc6\") pod \"redhat-marketplace-59vrl\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") " pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.468935 4789 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:19 crc kubenswrapper[4789]: I1124 12:15:19.994401 4789 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-59vrl"]
Nov 24 12:15:20 crc kubenswrapper[4789]: I1124 12:15:20.526898 4789 generic.go:334] "Generic (PLEG): container finished" podID="8427ca80-ceff-4c79-9a83-3aebadb74250" containerID="129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9" exitCode=0
Nov 24 12:15:20 crc kubenswrapper[4789]: I1124 12:15:20.526964 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59vrl" event={"ID":"8427ca80-ceff-4c79-9a83-3aebadb74250","Type":"ContainerDied","Data":"129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9"}
Nov 24 12:15:20 crc kubenswrapper[4789]: I1124 12:15:20.527322 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59vrl" event={"ID":"8427ca80-ceff-4c79-9a83-3aebadb74250","Type":"ContainerStarted","Data":"8efdc0b5e20bba33e7421315c06597d1294aba6cc134f3677d2a702630b26167"}
Nov 24 12:15:20 crc kubenswrapper[4789]: I1124 12:15:20.529846 4789 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 12:15:21 crc kubenswrapper[4789]: I1124 12:15:21.537038 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59vrl" event={"ID":"8427ca80-ceff-4c79-9a83-3aebadb74250","Type":"ContainerStarted","Data":"cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19"}
Nov 24 12:15:22 crc kubenswrapper[4789]: I1124 12:15:22.547064 4789 generic.go:334] "Generic (PLEG): container finished" podID="8427ca80-ceff-4c79-9a83-3aebadb74250" containerID="cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19" exitCode=0
Nov 24 12:15:22 crc kubenswrapper[4789]: I1124 12:15:22.547097 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59vrl" event={"ID":"8427ca80-ceff-4c79-9a83-3aebadb74250","Type":"ContainerDied","Data":"cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19"}
Nov 24 12:15:23 crc kubenswrapper[4789]: I1124 12:15:23.556527 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59vrl" event={"ID":"8427ca80-ceff-4c79-9a83-3aebadb74250","Type":"ContainerStarted","Data":"1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07"}
Nov 24 12:15:23 crc kubenswrapper[4789]: I1124 12:15:23.576174 4789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-59vrl" podStartSLOduration=1.771185199 podStartE2EDuration="4.576154796s" podCreationTimestamp="2025-11-24 12:15:19 +0000 UTC" firstStartedPulling="2025-11-24 12:15:20.529586478 +0000 UTC m=+2703.112057857" lastFinishedPulling="2025-11-24 12:15:23.334556075 +0000 UTC m=+2705.917027454" observedRunningTime="2025-11-24 12:15:23.574515816 +0000 UTC m=+2706.156987215" watchObservedRunningTime="2025-11-24 12:15:23.576154796 +0000 UTC m=+2706.158626175"
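The pod_startup_latency_tracker entry above encodes a small calculation: podStartE2EDuration is the watch-observed running time minus the creation timestamp, and podStartSLOduration additionally excludes the image-pull window (which is why the earlier collect-profiles entry, with zero-value pull timestamps, had identical values for both). Redoing the arithmetic in Go from the logged timestamps (monotonic "m=+" suffixes stripped) reproduces both numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-11-24 12:15:19 +0000 UTC")
        firstPull := parse("2025-11-24 12:15:20.529586478 +0000 UTC")
        lastPull := parse("2025-11-24 12:15:23.334556075 +0000 UTC")
        observed := parse("2025-11-24 12:15:23.576154796 +0000 UTC")

        e2e := observed.Sub(created) // 4.576154796s, the logged podStartE2EDuration
        pull := lastPull.Sub(firstPull) // 2.804969597s spent pulling the image
        slo := e2e - pull // 1.771185199s, the logged podStartSLOduration
        fmt.Println(e2e, pull, slo)
    }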
Nov 24 12:15:29 crc kubenswrapper[4789]: I1124 12:15:29.469915 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:29 crc kubenswrapper[4789]: I1124 12:15:29.470501 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:29 crc kubenswrapper[4789]: I1124 12:15:29.521216 4789 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:29 crc kubenswrapper[4789]: I1124 12:15:29.646131 4789 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:29 crc kubenswrapper[4789]: I1124 12:15:29.757451 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-59vrl"]
Nov 24 12:15:31 crc kubenswrapper[4789]: I1124 12:15:31.618574 4789 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-59vrl" podUID="8427ca80-ceff-4c79-9a83-3aebadb74250" containerName="registry-server" containerID="cri-o://1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07" gracePeriod=2
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.085965 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.205579 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9gc6\" (UniqueName: \"kubernetes.io/projected/8427ca80-ceff-4c79-9a83-3aebadb74250-kube-api-access-z9gc6\") pod \"8427ca80-ceff-4c79-9a83-3aebadb74250\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") "
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.205627 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-catalog-content\") pod \"8427ca80-ceff-4c79-9a83-3aebadb74250\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") "
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.206012 4789 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-utilities\") pod \"8427ca80-ceff-4c79-9a83-3aebadb74250\" (UID: \"8427ca80-ceff-4c79-9a83-3aebadb74250\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.207289 4789 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.216751 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8427ca80-ceff-4c79-9a83-3aebadb74250-kube-api-access-z9gc6" (OuterVolumeSpecName: "kube-api-access-z9gc6") pod "8427ca80-ceff-4c79-9a83-3aebadb74250" (UID: "8427ca80-ceff-4c79-9a83-3aebadb74250"). InnerVolumeSpecName "kube-api-access-z9gc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.228652 4789 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8427ca80-ceff-4c79-9a83-3aebadb74250" (UID: "8427ca80-ceff-4c79-9a83-3aebadb74250"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.308604 4789 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9gc6\" (UniqueName: \"kubernetes.io/projected/8427ca80-ceff-4c79-9a83-3aebadb74250-kube-api-access-z9gc6\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.308641 4789 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8427ca80-ceff-4c79-9a83-3aebadb74250-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.628621 4789 generic.go:334] "Generic (PLEG): container finished" podID="8427ca80-ceff-4c79-9a83-3aebadb74250" containerID="1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07" exitCode=0 Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.628659 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59vrl" event={"ID":"8427ca80-ceff-4c79-9a83-3aebadb74250","Type":"ContainerDied","Data":"1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07"} Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.628694 4789 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.628694 4789 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59vrl"
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.628713 4789 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59vrl" event={"ID":"8427ca80-ceff-4c79-9a83-3aebadb74250","Type":"ContainerDied","Data":"8efdc0b5e20bba33e7421315c06597d1294aba6cc134f3677d2a702630b26167"}
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.628738 4789 scope.go:117] "RemoveContainer" containerID="1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07"
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.648555 4789 scope.go:117] "RemoveContainer" containerID="cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19"
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.681056 4789 scope.go:117] "RemoveContainer" containerID="129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9"
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.758509 4789 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-59vrl"]
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.779932 4789 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-59vrl"]
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.786810 4789 scope.go:117] "RemoveContainer" containerID="1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07"
Nov 24 12:15:32 crc kubenswrapper[4789]: E1124 12:15:32.790600 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07\": container with ID starting with 1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07 not found: ID does not exist" containerID="1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07"
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.790645 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07"} err="failed to get container status \"1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07\": rpc error: code = NotFound desc = could not find container \"1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07\": container with ID starting with 1f27be337e3433a969bdcaad92e04fcb6e9ec9eadfdad797f96a2011959f8c07 not found: ID does not exist"
Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.790677 4789 scope.go:117] "RemoveContainer" containerID="cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19"
Nov 24 12:15:32 crc kubenswrapper[4789]: E1124 12:15:32.794623 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19\": container with ID starting with cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19 not found: ID does not exist" containerID="cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19"
container \"cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19\": container with ID starting with cc5fa69c07a2fe15afc24abc35eeaa51950152bd69820688963124e45fd77a19 not found: ID does not exist" Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.794709 4789 scope.go:117] "RemoveContainer" containerID="129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9" Nov 24 12:15:32 crc kubenswrapper[4789]: E1124 12:15:32.795141 4789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9\": container with ID starting with 129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9 not found: ID does not exist" containerID="129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9" Nov 24 12:15:32 crc kubenswrapper[4789]: I1124 12:15:32.795189 4789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9"} err="failed to get container status \"129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9\": rpc error: code = NotFound desc = could not find container \"129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9\": container with ID starting with 129b448cd80f9cab3fc4882aadde21cea4df395adc8e06cd138834957d118cb9 not found: ID does not exist" Nov 24 12:15:34 crc kubenswrapper[4789]: I1124 12:15:34.180268 4789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8427ca80-ceff-4c79-9a83-3aebadb74250" path="/var/lib/kubelet/pods/8427ca80-ceff-4c79-9a83-3aebadb74250/volumes" Nov 24 12:15:34 crc kubenswrapper[4789]: I1124 12:15:34.348976 4789 scope.go:117] "RemoveContainer" containerID="4623592cea64378ecbebfdd646e0ed0cedeb82b45bc21203235fef69e62288f2"